Is it possible to have dual time zones? - Amazfit

Hello everyone. I recently got my Amazfit and I travel a lot. Is it possible to have a watch face with dual time zones, or two watch faces set to different times?
I have read a lot but cannot find anything on the subject.

fjvilla said:
Hello everyone. I recently got my Amazfit and I travel a lot. Is it possible to have a watch face with dual time zones, or two watch faces set to different times?
I have read a lot but cannot find anything on the subject.
There is no direct support for it, but I hope Huami provides it soon.
There is a workaround. One option I found is to have a watch face with both digital and analog time. The digital shows your base time zone; the analog can be set to a different time zone by rotating the hour and minute hands by the corresponding offset. If the other time zone is 2:30 ahead, the hour hand should be rotated by 75 degrees and the minute hand by 180 degrees.
If you know a little Photoshop or GIMP, it is very easy.
I was going to create a simple watch face with 25 different time offsets; I have yet to find the time.
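To sanity-check the rotation math, here is a minimal sketch in plain C (the helper name is hypothetical, not part of any watch face tool): the hour hand moves 0.5 degrees per minute of offset, the minute hand 6 degrees per minute, wrapping at the hour.
Code:
#include <stdio.h>

/* Extra rotation (degrees) to apply to the analog hands for a given
 * time-zone offset, e.g. +2:30 (150 min) -> hour +75, minute +180. */
static void offset_to_rotation(int offset_minutes,
                               double *hour_deg, double *minute_deg)
{
    /* Hour hand: 360 deg / 12 h = 30 deg per hour = 0.5 deg per minute. */
    *hour_deg = offset_minutes * 0.5;
    /* Minute hand: 360 deg / 60 min = 6 deg per minute; only the
     * within-the-hour part matters, so wrap at 60 minutes. */
    *minute_deg = (offset_minutes % 60) * 6.0;
}

int main(void)
{
    double h, m;
    offset_to_rotation(150, &h, &m); /* the +2:30 example above */
    printf("hour hand: %+.1f deg, minute hand: %+.1f deg\n", h, m);
    return 0;
}
This prints +75.0 and +180.0 degrees, matching the 2:30 example; negative offsets simply give negative (counter-clockwise) rotations.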

madtech360 said:
There is no direct support for it, but I hope Huami provides it soon.
There is a workaround. One option I found is to have a watch face with both digital and analog time. The digital shows your base time zone; the analog can be set to a different time zone by rotating the hour and minute hands by the corresponding offset. If the other time zone is 2:30 ahead, the hour hand should be rotated by 75 degrees and the minute hand by 180 degrees.
If you know a little Photoshop or GIMP, it is very easy.
I was going to create a simple watch face with 25 different time offsets; I have yet to find the time.
Very good idea. But it's sad that the Amazfit people have not thought of this.

fjvilla said:
Very good idea. But it's sad that the Amazfit people have not thought of this.
The watch software is still very much in active development so I suspect this will come in the near future. The Chinese watches already have many other advanced features. While it's difficult waiting for them - at least we know that the features have been through a lot of user testing before coming to the US.

kwoodall said:
The watch software is still very much in active development so I suspect this will come in the near future. The Chinese watches already have many other advanced features. While it's difficult waiting for them - at least we know that the features have been through a lot of user testing before coming to the US.
I know it has very advanced features, but I think this is something basic.

madtech360 said:
There is no direct support for it, but I hope Huami provides it soon.
There is a workaround. One option I found is to have a watch face with both digital and analog time. The digital shows your base time zone; the analog can be set to a different time zone by rotating the hour and minute hands by the corresponding offset. If the other time zone is 2:30 ahead, the hour hand should be rotated by 75 degrees and the minute hand by 180 degrees.
If you know a little Photoshop or GIMP, it is very easy.
I was going to create a simple watch face with 25 different time offsets; I have yet to find the time.
Brother, can you provide some steps on how to do it in JSON?
Code:
"AnalogDialFace": {
"Hours": {
"OnlyBorder": false,
"Color": "0x0000FF",
"Center": {
"X": 54,
"Y": 59
},
"Shape": [
{
"X": 14,
"Y": 1
},
{
"X": 14,
"Y": -1
},
{
"X": 34,
"Y": -2
},
{
"X": 36,
"Y": 0
},
{
"X": 34,
"Y": 2
}
]
},
"Minutes": {
"OnlyBorder": false,
"Color": "0x0000FF",
"Center": {
"X": 54,
"Y": 59
},
"Shape": [
{
"X": 14,
"Y": 1
},
{
"X": 14,
"Y": -1
},
{
"X": 34,
"Y": -2
},
{
"X": 36,
"Y": 0
},
{
"X": 38,
"Y": 0
},
{
"X": 38,
"Y": -1
},
{
"X": 50,
"Y": -1
},
{
"X": 50,
"Y": 1
},
{
"X": 38,
"Y": 1
},
{
"X": 38,
"Y": 0
},
{
"X": 36,
"Y": 0
},
{
"X": 34,
"Y": 2
}
]
}
}

@madtech360 brother any help is much appreciated

shreyanshp said:
@madtech360 brother any help is much appreciated
I am not sure how you would do it in JSON, but I found watch faces implementing dual time zones using the approach I mentioned above:
https://amazfitwatchfaces.com/pace/view/?id=718
https://amazfitwatchfaces.com/pace/view/?id=719
Unfortunately the creator provided only whole-hour offsets (1h, 2h, 3h...) and no 30-minute increments (1:30, 2:30, 3:30...). I modified the time hands for the 30-minute increments as well and updated the previews for a better experience. (Attached. Rename .zip to .wfz)
To use the dual time zone, set the other time zone with your own time zone as the baseline. For example, if you are based in Paris, your digital time will always be Paris time (synced with your watch). For the analog (second) time zone, if you want London time, select the 11h (-1) offset; if you need Jerusalem time, select the 1h (+1) offset.
All credits to the original creator.

madtech360 said:
I am not sure how you would do it in JSON, but I found watch faces implementing dual time zones using the approach I mentioned above:
https://amazfitwatchfaces.com/pace/view/?id=718
https://amazfitwatchfaces.com/pace/view/?id=719
Unfortunately the creator provided only whole-hour offsets (1h, 2h, 3h...) and no 30-minute increments (1:30, 2:30, 3:30...). I modified the time hands for the 30-minute increments as well and updated the previews for a better experience. (Attached. Rename .zip to .wfz)
To use the dual time zone, set the other time zone with your own time zone as the baseline. For example, if you are based in Paris, your digital time will always be Paris time (synced with your watch). For the analog (second) time zone, if you want London time, select the 11h (-1) offset; if you need Jerusalem time, select the 1h (+1) offset.
All credits to the original creator.
Thanks bro!
However, I think this is for the Amazfit Stratos; I was talking about the Amazfit Bip.
I am still trying to do it in JSON. The angles are a bit weird on the Bip as the screen is not a circle, so I have to go through a lot of trial and error to get the angles/time difference right, especially for the ±30 minute offsets. I will definitely post when I finish at least one watch face.
Do you have any tips/inputs for the Amazfit Bip?

shreyanshp said:
Thanks bro!
However, I think this is for the Amazfit Stratos; I was talking about the Amazfit Bip.
I am still trying to do it in JSON. The angles are a bit weird on the Bip as the screen is not a circle, so I have to go through a lot of trial and error to get the angles/time difference right, especially for the ±30 minute offsets. I will definitely post when I finish at least one watch face.
Do you have any tips/inputs for the Amazfit Bip?
I have no idea about the Bip and its watch faces, but did you check the site https://amazfitwatchfaces.com/bip for 'Dual'?
I found quite a few Bip watch faces with dual time zones:
https://amazfitwatchfaces.com/bip/view/?id=7826
https://amazfitwatchfaces.com/bip/view/?id=7825

APK watch faces for the Pace should be able to do dual time zones.
APK faces can do anything.

madtech360 said:
I am not sure how you would do it in JSON, but I found watch faces implementing dual time zones using the approach I mentioned above:
https://amazfitwatchfaces.com/pace/view/?id=718
https://amazfitwatchfaces.com/pace/view/?id=719
Unfortunately the creator provided only whole-hour offsets (1h, 2h, 3h...) and no 30-minute increments (1:30, 2:30, 3:30...). I modified the time hands for the 30-minute increments as well and updated the previews for a better experience. (Attached. Rename .zip to .wfz)
To use the dual time zone, set the other time zone with your own time zone as the baseline. For example, if you are based in Paris, your digital time will always be Paris time (synced with your watch). For the analog (second) time zone, if you want London time, select the 11h (-1) offset; if you need Jerusalem time, select the 1h (+1) offset.
All credits to the original creator.
Hi,
I am unable to download it. Can you please make it available?
I need the white one.
Rgds,
Niranjan

niranjandandekar said:
Hi,
I am unable to download it. Can you please make it available?
I need the white one.
Rgds,
Niranjan
The download is working fine; maybe it was a temporary server issue where the file was hosted. Anyway, the GreatFit APK watch face also has the option of dual time zones as far as I know, so maybe you want to check it out.

@madtech - The ".zip" attachment seems to have disappeared. Any way of getting it back? Thanks a ton!

rudisubscribes said:
@madtech - The ".zip" attachment seems to have disappeared. Any way of getting it back? Thanks a ton!
Downloads from AmazfitWatchfaces.com are working fine, but they are not ZIP but WFZ files...

Watch face for a -03:45 time difference.
I have made one for a -03:45 time difference but could not upload the file here. The dial hands of the analog face can be plotted in AutoCAD or any graphics software and rotated by the angle corresponding to the desired time difference to obtain new coordinates, which are then updated in the JSON file. It works.
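A minimal sketch of that coordinate rotation in plain C, assuming the hand shape is given as (X, Y) points around the hand's pivot, as in the "Shape" arrays earlier in the thread (depending on the screen's Y direction, the sign of the angle may need flipping):
Code:
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Rotate a hand's shape points by the offset angle and print them in
 * a form that can be pasted back into the watch-face JSON. */
static void rotate_shape(const double x[], const double y[], int n,
                         double angle_deg)
{
    double rad = angle_deg * M_PI / 180.0;
    for (int i = 0; i < n; i++) {
        double rx = x[i] * cos(rad) - y[i] * sin(rad);
        double ry = x[i] * sin(rad) + y[i] * cos(rad);
        printf("{ \"X\": %.0f, \"Y\": %.0f },\n", rx, ry);
    }
}

int main(void)
{
    /* Hour-hand shape from the JSON earlier in the thread.
     * A -3:45 offset is -3.75 h * 30 deg/h = -112.5 deg for the hour hand. */
    double x[] = { 14, 14, 34, 36, 34 };
    double y[] = {  1, -1, -2,  0,  2 };
    rotate_shape(x, y, 5, -112.5);
    return 0;
}
The rounded coordinates then replace the original "Shape" entries for that offset's hand.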

Related

[QUESTION] Audio Boost Limits & RGB565/RGB666

Hi all, as you may or may not know from my recent posts, I've been trying my hand at kernel development. I have some questions though.
What are the limits to audio boost? I have mine boosted up to 2500 max and -500 min, everything runs fine, and I get a HUGE boost over the normal range of -1500 to 1100. Why aren't all kernels built like this? Obviously stupid values like 100,000 won't work, but why not ~2000-2500 on audio-boost kernels? Why not 3000? What is the safe level? I'm just wondering what the reasoning behind the oddball 1100 max on most kernels is.
Next, most kernel devs know about the RGB666 (more color depth) support for the CDMA revisions in board-mahimahi-panel.c. I looked through the code, and other than the revision checker that decides whether to use the RGB666 or RGB565 init table, there is only one line of code different between the two panels, and that's in the init table itself:
Code:
static struct lcm_tbl samsung_oled_rgb565_init_table[] = {
	{ 0x31, 0x08 },
	{ 0x32, 0x14 },
	{ 0x30, 0x2 },
	{ 0x27, 0x1 },
	{ 0x12, 0x8 },
	{ 0x13, 0x8 },
	{ 0x15, 0x0 },
	{ 0x16, 0x02 },	/* <-- the only line that differs */
	{ 0x39, 0x24 },
	{ 0x17, 0x22 },
	{ 0x18, 0x33 },
	{ 0x19, 0x3 },
	{ 0x1A, 0x1 },
	{ 0x22, 0xA4 },
	{ 0x23, 0x0 },
	{ 0x26, 0xA0 },
};

static struct lcm_tbl samsung_oled_rgb666_init_table[] = {
	{ 0x31, 0x08 },
	{ 0x32, 0x14 },
	{ 0x30, 0x2 },
	{ 0x27, 0x1 },
	{ 0x12, 0x8 },
	{ 0x13, 0x8 },
	{ 0x15, 0x0 },
	{ 0x16, 0x01 },	/* <-- the only line that differs */
	{ 0x39, 0x24 },
	{ 0x17, 0x22 },
	{ 0x18, 0x33 },
	{ 0x19, 0x3 },
	{ 0x1A, 0x1 },
	{ 0x22, 0xA4 },
	{ 0x23, 0x0 },
	{ 0x26, 0xA0 },
};
That is the ONLY difference, even between the gamma level settings. Does that ONE line have that much of an effect as to increase color depth that much?
Anyway, I've removed the version check code and set it to always use the RGB666 init table. The kernel runs and boots fine, and all the graphics still run great. I've done some admittedly very subjective tests with images that I KNOW have severe banding in Gallery3D on a stock kernel, and on my kernel the banding is almost, if not entirely, gone. It is a noticeable improvement, but without side-by-side tests that I simply can't do, it's not very scientific at all.
Can any experts weigh in? Sorry if it seems like a lot of questions, I'm just trying to figure this out so I can attempt to contribute something lol
Where did you test the images? I thought the banding was due to the 16-bit colour depth coded into the Gallery3D app.
Somebody boost the ringtone volume please.
There is certainly a solution if the 'in-call volume' has already been boosted.
vegetaleb said:
Somebody boost the ringtone volume please.
There is certainly a solution if the 'in-call volume' has already been boosted.
+1 pls, it rings with ridiculously low volume
I have to agree with this.
In-Call Volume NOOB
I recently upgraded from Cyanogen 5053 to 506, and I noticed that the Bluetooth volume in calls is super loud, so loud that I can't understand what's being said. Is there a way to reduce the in-call volume through a nice GUI, or is there another way to lower it?
The range for qdsp6 audio is:
+1200 to -4000
After some testing, I determined +1100 to -1500 was a good trade-off between too loud and too soft, within which it is adjustable via the volume rocker.
pershoot said:
The range for qdsp6 audio is:
+1200 to -4000
After some testing, I determined +1100 to -1500 was a good trade-off between too loud and too soft, within which it is adjustable via the volume rocker.
Ahh OK thanks. Good ol' placebo effect lol.
Also, I knew there were problems with Gallery3D being 16-bit, but a stock Nexus kernel only supports RGB565 on GSM/UMTS devices, while CDMA revisions allow RGB666. What effect does this have on those of us with only RGB565? Is hardcoding RGB666 values unsafe? What effect will it have, since the color depth issue is just in Gallery3D?
Hmm, looking at the code where it sets the absolute min/max values, would it be possible to change the values here to allow higher highs? i.e.:
Code:
int q6audio_set_stream_volume(struct audio_client *ac, int vol)
{
	if (vol > 1200 || vol < -4000) {
		pr_err("unsupported volume level %d\n", vol);
		return -EINVAL;
	}
	mutex_lock(&audio_path_lock);
	audio_stream_mute(ac, 0);
	audio_stream_volume(ac, vol);
	mutex_unlock(&audio_path_lock);
	return 0;
}
changed to say:
Code:
int q6audio_set_stream_volume(struct audio_client *ac, int vol)
{
	if (vol > 1500 || vol < -4000) {	/* raised upper limit from 1200 */
		pr_err("unsupported volume level %d\n", vol);
		return -EINVAL;
	}
	mutex_lock(&audio_path_lock);
	audio_stream_mute(ac, 0);
	audio_stream_volume(ac, vol);
	mutex_unlock(&audio_path_lock);
	return 0;
}
Theoretically, you could probably change the volume limit levels to whatever you want. I know on the G1 you can carry on going until the speaker physically breaks, although it is done in a different way.
Meltus said:
Theoretically, you could probably change the volume limit levels to whatever you want. I know on the G1 you can carry on going until the speaker physically breaks, although it is done in a different way.
Interesting. How high do you think we can safely push the Nexus, or do you have no idea? I'm kinda afraid to just experiment because I'm afraid to break my speaker, but if you have no idea I will try.
Geniusdog254 said:
Interesting. How high do you think we can safely push the Nexus, or do you have no idea? I'm kinda afraid to just experiment because I'm afraid to break my speaker, but if you have no idea I will try.
I have absolutely no idea, I'm afraid; I don't know how to increase the volume properly, but I'm trying to figure it out (no idea if I'll be successful).
I came close to breaking the main speaker on my G1 when I first started experimenting with the values, but as long as you're careful you should be OK lol
Geniusdog254 said:
Ahh OK thanks. Good ol' placebo effect lol.
Also, I knew there were problems with Gallery3D being 16-bit, but a stock Nexus kernel only supports RGB565 on GSM/UMTS devices, while CDMA revisions allow RGB666. What effect does this have on those of us with only RGB565? Is hardcoding RGB666 values unsafe? What effect will it have, since the color depth issue is just in Gallery3D?
Regarding RGB565, a Google-employed Android project kernel hacker who occasionally visits this forum (swetland) confirmed that the physical wiring to the AMOLED panel is only RGB565/16 bits. The one-bit change you mention is in the panel configuration data and probably tells the panel to expect data on those extra two input bits. They are probably the least significant bits, tied low in hardware, which is why you don't notice it not working. I doubt there is any chance of harm, but there is also zero chance of it improving the panel quality. The extra bits simply aren't connected.
(There are also some more changes for RGB666 in the MDP driver that you seemingly didn't notice... these are less likely to be safe, as they control the extra bits coming out of the QSD8250. Those pins might be used for something else, and it's difficult to know whether it's safe without both the QSD8250 programming manual (not public) and the PCB schematics.)
Hi, I've seen many people complaining about audio boost to get higher volume through the phone speaker, but very few looking at Bluetooth headsets.
Audio boost makes the volume through these too high; in the end the quality is very bad, and the audio ends up overboosted and noisy.
Do you know if there is any way to separate the built-in speaker and BT parameters?
ya how bout some you guys try a hearing aid

[Q] Help developing a looping video live wallpaper

I'm a very new Java developer, having started learning it last month specifically for this project. I have fifteen years of development experience, but it has almost all been web related (HTML, JS, jQuery, ColdFusion, etc.), so this is a major paradigm change that I'm having trouble wrapping my head around.
Anyway, I'm attempting to create a movie-based live wallpaper to sell on the app store. I have a 15-second MPG (or 450 PNG frames) derived from some rendered artwork I did, the bottom 35% of which has motion (the rest remains relatively static). I'd like code flexible enough to handle future animations as well, though, as I just rediscovered Vue and may do other videos where the entire frame has motion.
My initial attempts are detailed on my Stack Overflow question at: (link removed due to forum rules; findable with the title: How do you create a video live wallpaper).
That post, in short, boils down to having tried these different approaches:
1. Load frames into a bitmap array and display on canvas in a loop: excellent FPS but hundreds of MB of memory use.
2. Load frames into a byteArray as JPGs and decode during display: only 10 FPS at 60% CPU usage on powerful hardware, but with good memory usage.
3. Load a tiled sprite with all 450 frames into AndEngine as a texture and display: went OOM trying to allocate 200 MB of memory.
4. AndEngine again. Load a tiled JPG with 10 frames into a sprite, load the next tiled JPG into a second sprite, every 400 ms hide one sprite and display the second, then load the upcoming JPG into the hidden sprite; rinse, repeat. Essentially attempting to decode in a makeshift buffer.
I feel like maybe method 4 has promise and am including the code I'm using below. However, every time the sprites are swapped out the screen freezes for as long as a second or two. I tried adding timers between every line of code to determine what's taking so much time, but they almost always come back with barely a millisecond or two taken, leaving me confused about where the freeze is occurring. But I don't understand AndEngine well yet (or even Java) so I may be doing something completely boneheaded.
I'd welcome any thoughts, whether a refinement on an existing method or a completely new idea. I've had a horrible time trying to find tutorials on doing this, and questions I find here and on SO generally don't offer much encouragement. I just want to get this thing finished so I can concentrate on the heart of this project: the art. Thanks!
As an aside, how much work would this be (ie: how much would it cost) for an experienced developer to create a template for me? I wouldn't mind paying a small amount for something I can keep using with future animations.
Code:
public void onCreateResources(OnCreateResourcesCallback pOnCreateResourcesCallback) throws Exception {
    scene = new Scene();
    initializePreferences();
    // Water
    waterTexture = new BitmapTextureAtlas(this.getTextureManager(), 1200, 950, TextureOptions.BILINEAR);
    waterRegion = BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(waterTexture, this.getAssets(), "testten1.jpg", 0, 0, 2, 5);
    waterTexture.load();
    waterTexture2 = new BitmapTextureAtlas(this.getTextureManager(), 1200, 950, TextureOptions.BILINEAR);
    waterRegion2 = BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(waterTexture2, this.getAssets(), "testten2.jpg", 0, 0, 2, 5);
    waterTexture2.load();
    water = new AnimatedSprite(0, 0, waterRegion, this.getVertexBufferObjectManager());
    water2 = new AnimatedSprite(0, 0, waterRegion2, this.getVertexBufferObjectManager());
    scene.attachChild(water);
    water.animate(40);
    mHandler.postDelayed(mUpdateDisplay, 400);
}

private final Handler mHandler = new Handler();

private final Runnable mUpdateDisplay = new Runnable() {
    @Override
    public void run() {
        changeWater();
    }
};

public void changeWater() {
    mHandler.removeCallbacks(mUpdateDisplay);
    mHandler.postDelayed(mUpdateDisplay, 400);
    if (curWaterTexture == 1) {
        Log.w("General", "Changed texture to 2 with resource: " + curWaterResource);
        curWaterTexture = 2;
        scene.attachChild(water2);
        water2.animate(40);
        scene.detachChild(water);
        curWaterResource = curWaterResource + 1;
        if (curWaterResource > 4) curWaterResource = 1;
        String resourceName = "testten" + curWaterResource + ".jpg";
        waterRegion = BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(waterTexture, this.getAssets(), resourceName, 0, 0, 2, 5);
        waterTexture.load();
        water = new AnimatedSprite(0, 0, waterRegion, this.getVertexBufferObjectManager());
    } else {
        Log.w("General", "Changed texture to 1 with resource: " + curWaterResource);
        curWaterTexture = 1;
        scene.attachChild(water);
        water.animate(40);
        scene.detachChild(water2);
        curWaterResource = curWaterResource + 1;
        if (curWaterResource > 4) curWaterResource = 1;
        String resourceName = "testten" + curWaterResource + ".jpg";
        waterRegion2 = BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(waterTexture2, this.getAssets(), resourceName, 0, 0, 2, 5);
        waterTexture2.load();
        water2 = new AnimatedSprite(0, 0, waterRegion2, this.getVertexBufferObjectManager());
    }
}

SDK or NDK w/o OpenGL in app to save battery?

I would like to develop an app whose main functionality is to show a 2D chart. There are other views and settings, but this is the main one.
Existing similar applications aren't using the technology as they should: they release apps with too few features, or they drain the battery like crazy. This is what I would like to improve.
This is for web / desktop
Here are a few statements from my experience; please correct me if I am wrong!
Many, many implementations use the SDK and 2D graphics. They redraw the whole screen every time. Just see how your battery drains in 1-2 hours! Some really bad implementations can drain it in 3 minutes; I will show one later. One exception is MetaTrader 4: it uses the NDK and OpenGL. (Hey, pay me, because I am advertising you!)
I think SDK code (.class files) needs to be compiled at runtime by the virtual machine, which costs CPU.
NDK code is compiled only once, so it will use less CPU and drain less battery than the SDK version.
If this is correct, then more code should be moved to the NDK where possible. I am not sure about the JNI call overhead!
A huge NDK advantage: it allows using the phone's maximum available memory (1.5 GB last time I checked) instead of the SDK's maximum allowed value of 256 MB (in my case; some devices allow only 24 MB for SDK apps).
If you don't touch the chart, nothing changes; if you set the "Live" option, the last "candle" changes. The candles are those red and green rectangles with a line on the upper and lower side, representing the open, high, low, and close prices. This is business logic, but these special bars can have those two lines, which are important. The 1-pixel black border around the red/green rectangles is for visual effect only and can be omitted to save battery.
(The background text and logos are just advertisements; nobody cares, and the user doesn't really need them.)
As you can see there is a toolbar, and there may be other components too, but when the user wants to see the largest amount of data, they will rotate to landscape, which should be given full screen (not by default in portrait).
I have started the development with a component that renders with OpenGL ES 2.0.
The UI is in the SDK; the renderer is called there, but the method implementation calls native methods via JNI. The business logic and rendering implementation are all in the NDK, in a native shared library.
I'm not sure why I started with OpenGL. To have some cool effects? Who wants to drain their battery faster? Or did I think that if it is faster, it will consume less battery? Maybe it drains faster, but consider: 100 mA from the GPU for 0.01 seconds of rendering uses the same charge as 10 mA for 0.1 seconds on the CPU.
Here I am not sure if I am saving energy. Tell me your opinion!
So I started learning OpenGL ES 2.0, and all I saw in tutorials was triangle, triangle, triangle. But this chart has no obvious triangles: do I use rectangles, or rectangles plus lines, or just lines with a set width, or many triangles?
Here is a cool candle I would like to see, but I know the gradients will burn battery.
Here it is a bit magnified so you can see how I am thinking, but I can't decide: I don't have enough experience and don't know the benchmark results. I would like to see a teardown and the expected results for:
GL_LINES + glLineWidth
GL_TRIANGLES
GL_TRIANGLE_STRIP
For the first option, GL_LINES:
A candle would be one small line from the low to the high price, drawn first. Say 5 pixels wide, to have room for a gradient too if needed; in the worst case only 1 pixel, as others are using now.
The candle body, the red/green rectangle carrying the open/close prices, can be another GL_LINES with a 50-pixel width, for example.
Whether it gets a gradient or not would depend on user settings. I am not sure how to draw the 1-2 pixel black border around the candle body in this case, if the user has that setting.
Second option, triangles (see the sketch below):
Two triangles can make a square. One square is the thin wick line, and another, larger square is the candle body.
The triangle strip case:
With only one GL_TRIANGLE_STRIP: bottom line 4 vertices, body another 4, upper line 4 vertices;
or the wick as 4 vertices plus the body as another 4 vertices, i.e. two GL_TRIANGLE_STRIPs.
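For concreteness, here is a minimal sketch of the triangle option in plain C: each candle becomes two axis-aligned quads (wick and body), four vertices and six indices per quad, so any number of candles can be appended into one vertex/index array. All names and coordinates are illustrative.
Code:
#include <stdio.h>

typedef struct { float x, y; } Vertex;

/* Append one quad as 4 vertices + 6 indices (2 triangles);
 * returns the new vertex base. */
static int add_quad(Vertex *v, unsigned short *idx, int base,
                    float x0, float y0, float x1, float y1)
{
    v[base + 0] = (Vertex){ x0, y0 };
    v[base + 1] = (Vertex){ x1, y0 };
    v[base + 2] = (Vertex){ x1, y1 };
    v[base + 3] = (Vertex){ x0, y1 };
    unsigned short *p = idx + (base / 4) * 6;
    p[0] = base; p[1] = base + 1; p[2] = base + 2; /* first triangle  */
    p[3] = base; p[4] = base + 2; p[5] = base + 3; /* second triangle */
    return base + 4;
}

/* One candle = thin wick quad (low..high) + wide body quad (open..close). */
static int add_candle(Vertex *v, unsigned short *idx, int base, float x,
                      float w, float open, float high, float low, float close)
{
    base = add_quad(v, idx, base, x - 1, low, x + 1, high);           /* wick */
    base = add_quad(v, idx, base, x - w / 2, open, x + w / 2, close); /* body */
    return base;
}

int main(void)
{
    Vertex v[8];
    unsigned short idx[12];
    int n = add_candle(v, idx, 0, 10.0f, 6.0f, 98.f, 105.f, 95.f, 102.f);
    printf("%d vertices, %d indices for one candle\n", n, (n / 4) * 6);
    return 0;
}
With all candles in one array, the whole chart can be a single glDrawElements(GL_TRIANGLES, ...) call, which also avoids per-candle draw-call overhead.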
I made this. The first thing was:
Code:
setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
It is a rectangle made of two triangles, plus a line with width.
The line has fewer vertices of course, but this chart is expected to have 50-200 bars maximum, 10-20 minimum.
What is also needed is the cross at the touch location: the dashed lines, the price numbers on the right, and some other text at the bottom with the X, Y coordinates.
Please post your experience, benchmarks, and opinions.
Hi matheszabi,
OK, your question was rather comprehensive, so the right way to find a solution is rather complicated too.
I wouldn't choose a solution based on differences between SDK/NDK for the same OpenGL ES 2.0 function calls. In my experience, performance (and power consumption) is almost the same.
Global approach - I suggest:
1) Create your application in Java (SDK versions) - creating native functions for just a few draw calls is, IMHO, hard work compared to what you get.
2) If your application is a NativeActivity:
- use direct calls - be prepared for differences between devices, proper creation of the surface, etc., but it's faster
- or subclass NativeActivity and use Java as well
3) Don't use OpenGL ES directly at all - I suggest trying a layout with buttons/images inside it; I saw 200 control buttons on one screen and it ran perfectly. And a layout is movable with one command - offset. I suppose this is completely written in native code anyway; you just get the drawing interface in Java.
Now, to OpenGL ES itself (Java or NDK):
Cons:
- OpenGL is a 3D interface, so pixel-perfect drawing is not supported by itself. (Note: OpenGL has the pixel/texel center at 0.5, not at 0.0/1.0.)
- You need pixel-perfect drawing - the boxes you draw must be perfectly aligned to screen pixels - otherwise you get blurry results.
- You might not need texturing - that's a PLUS; otherwise, drawing pixel-perfect OpenGL surfaces is a "pain in the ..." - texel interpolation and drawing a pixel checker aligned with screen pixels is a real nightmare, and it differs on each device (I tried).
- Pixel-perfect with mipmaps - awful again.
- Alpha blending (when you have something transparent) - you will need sorting to draw it properly.
- Will you need fonts? Oh my...
- NO EMULATOR - you will need to test on each device to see if it works properly (not every device, but at least each vendor).
To consider:
Writing vertex/fragment shaders for a world with 3500 suitable devices/compilers/Android versions (the number of OpenGL ES 2.0 devices according to Google)... ehm.
Pluses:
- You draw exactly what you want.
- Use of fragment shaders (although I suggest using gradient textures instead).
- I suggest drawing primitives from vertex buffers (indexed triangles or triangle strips). Not one box at a time - compose everything into one array and draw it with one draw call. Doubling points in strips can help you.
- Instancing is not supported, but there is a way around this (preparing buffers with repeated data).
- I don't recommend using lines with a width definition - each implementation draws them differently, and it's equivalent to drawing a lit cylinder.
- In a NativeActivity loop, handling whether you need a redraw is difficult.
I would try the normal Android GUI first.
Pluses:
- You get pixel-perfect drawing, with/without bitmaps, images, gradients...
- I suppose it's fast enough.
- No sorting needed.
- Redraws are invoked only when something changes.
- Translation/clipping is handled internally.
- No need to care about the device!!! That's a big plus.
Cons:
- Zoom will be problematic.
- Rotations as well (but I suppose you don't need them).
A note on power consumption, for both OpenGL options: just swapping screen buffers 30 times per second will deplete most devices within 2-3 hours. But of course, doing it only when needed is doable.
But I might be wrong, so the best way is to decide for yourself and, of course, try it and see.
If you have questions about drawing through ES, I'll post some samples of pixel-perfect drawing (from my GUI).
Good luck, post your decision, and if you have results, post them too! It will be very interesting.
PS.
Petr Sovis said:
Hi matheszabi,
OK, your question was rather comprehensive, so the right way to find a solution is rather complicated too. [...]
Hello, thanks for sharing your ideas and experience.
I asked a similar question in another forum.
The response was that the GPU has optimized methods, so it will have less power consumption.
He also likes to draw the whole frame, not only a part, unlike my optimization.
I also asked an OpenGL expert and he told me to use an orthographic instead of a perspective projection. He was right:
now I don't have blurry results anymore; it is pixel-perfect!
For a benchmark I made 100 candles: 100 lines + 200 triangles.
I calculated the width and the height of the surface, hence how much space a candle can have, applied the matrix transformations, and rendered it.
A nice surprise: the first time it took 18 milliseconds, but after that the SDK -> NDK -> OpenGL -> SDK round trip took only 5-6 ms.
That is a very good result on MY trash device. But I think there is a trick: the NDK issues the render commands to the GPU and returns, so after those 5-6 milliseconds the GPU starts working for who knows how long...
Note: audio, video, and sensor intensive projects should be developed on real devices, not emulators.
With my method the problem starts with text drawing and my lack of knowledge.
VBO or texture... I am not sure which one, but if a native 2D renderer can do it, then I can too.
Cairo as a 2D rendering engine is a bit big to include on Android.
I still think the NDK is the correct place. Also, it isn't as easy to inspect engine code as decompiled SDK code.
OK! You chose your path - good for you!
Is the 18 / 5-6 ms just for the draw (200 triangles + 100 lines), or the whole frame time? Can you tell me the device you are testing your app on?
Note on ortho: I didn't mean to advise the perspective transform; ortho is still 3D and pixel-perfect. Wait until texturing/text drawing, but you'll manage!
Just a few tips on font rendering:
bitmap fonts: I can't add links, so google: angelcode bmfont
font bitmap creator: google: angelcode bmfont
+ sample: google: nehe 2d_texture_font
FreeType implementation - it's possible to compile it with the NDK; I use it and it's simple
google: freetype
and code for starters: google: nehe freetype_fonts_in_opengl
Cheers
P.
Petr Sovis said:
OK! You chose your path - good for you!
Is the 18 / 5-6 ms just for the draw (200 triangles + 100 lines), or the whole frame time? Can you tell me the device you are testing your app on? [...]
My phone is a THL W8S; it has FHD resolution.
It had a mixed layout, but it's better if I show you:
I have miscalculated the line positions, but that doesn't matter for now.
The GLSurface takes up a considerable amount of the screen space, and that is what is counted.
On a Samsung Note 10" - an at least year-old device with lower resolution but a better CPU + GPU - these numbers are a lot lower.
My device is trash (by far not for gaming), I told you:
Hi,
OK, as I understand it, the time is just for drawing 200 tris + 100 lines - that seems a "little too much" for that.
Besides, your device is not trash at all.
I don't know if I can share my examples - but a scene from an app I released a week ago draws 50K tris per frame with some heavy shaders (normal mapping + specular lighting from 2 light sources) and draws text over it + particles + GUI (my NDK GLES 2.0 engine), and I suppose it will run at over 30 FPS on your device (it's a PowerVR 540 or something). (Google Play: "Asteroid Hunters" by me.)
So, can you post a fragment of your drawing code? Maybe there is something not quite right?
P
Petr Sovis said:
So, can you post a fragment of your drawing code? Maybe there is something not quite right?
P
I will post it, but for me this is good enough; others are drawing in 200-2000 milliseconds.
The SDK part:
Code:
@Override
public void onDrawFrame(GL10 gl) {
    long start = System.currentTimeMillis();
    LibJNIWrapper.rendererOnDrawFrame();
    long end = System.currentTimeMillis();
    //Log.e("Renderer", "onDrawFrame() took: " + (end - start) + " ms");
}
The related NDK part:
Code:
void renderer_on_draw_frame() // 4
{
    LOGE("renderer_on_draw_frame");
    // copy
    static float grey;
    grey += 0.01f;
    if (grey > 1.0f) {
        grey = 0.0f;
    }
    glClearColor(grey, grey, grey, 1.0f);
    checkGlError("glClearColor");
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    checkGlError("glClear");
    int i;
    for (i = 0; i < CANDLES_COUNT; i++) {
        //LOGE("chartengine rendering i= %d", i);
        renderLine(candles[i].coloredSingleLine);
        renderRectangle(candles[i].coloredSingleRectangle);
    }
    glFlush();
}
line rendering:
Code:
void renderLine(ColoredSingleLine *pColoredSingleLine) {
    if (pColoredSingleLine == NULL) {
        return;
    }
    //matrixTranslateM(mvpMatrix, rndX, rndY, 0);
    //LOGE("###renderLine pColoredSingleLine %p\n", pColoredSingleLine);
    glUseProgram(pColoredSingleLine->programColoredLine.program);
    checkGlError("Line: glUseProgram");
    // Enable a handle to the line vertices
    glEnableVertexAttribArray(pColoredSingleLine->programColoredLine.a_position_location);
    checkGlError("Line: glEnableVertexAttribArray");
    // Prepare the line coordinate data
    glVertexAttribPointer(pColoredSingleLine->programColoredLine.a_position_location,
            pColoredSingleLine->lineData.COORDS_PER_VERTEX_LINE, GL_FLOAT, GL_FALSE,
            pColoredSingleLine->lineData.vertexStride, pColoredSingleLine->lineData.vertices);
    checkGlError("Line: glVertexAttribPointer");
    // Set the color for drawing the line
    glUniform4fv(pColoredSingleLine->programColoredLine.u_color_location, 1, pColoredSingleLine->lineData.color_rgba);
    checkGlError("Line: glUniform4fv");
    // Apply the projection and view transformation
    glUniformMatrix4fv(pColoredSingleLine->programColoredLine.u_mvp_matrix_location, 1, GL_FALSE, pColoredSingleLine->lineData.mvpMatrix);
    checkGlError("Line: glUniformMatrix4fv");
    glLineWidth(pColoredSingleLine->lineData.lineWidth);
    // Draw the line
    glDrawArrays(GL_LINES, 0, pColoredSingleLine->lineData.vertexCount); // (GLenum mode, GLint first, GLsizei count)
    checkGlError("Line: glDrawArrays");
    // GL_INVALID_ENUM is generated if mode is not an accepted value.
    GLint error = glGetError();
    if (error == GL_OUT_OF_MEMORY) {
        LOGE("Out of Video memory error...congrats...is your device a stone ?");
    }
    // Disable the vertex array
    glDisableVertexAttribArray(pColoredSingleLine->programColoredLine.a_position_location);
}
and the rectangle:
Code:
void renderRectangle(ColoredSingleRectangle *pColoredSingleRectangle) {
    if (pColoredSingleRectangle == NULL) {
        return;
    }
    //matrixTranslateM(mvpMatrix, rndX, rndY, 0);
    glUseProgram(pColoredSingleRectangle->programColoredRectangle.program);
    checkGlError("Rectangle: glUseProgram");
    // Enable a handle to the triangle vertices
    glEnableVertexAttribArray(pColoredSingleRectangle->programColoredRectangle.a_position_location);
    checkGlError("Rectangle: glEnableVertexAttribArray");
    // Prepare the triangle coordinate data
    glVertexAttribPointer(pColoredSingleRectangle->programColoredRectangle.a_position_location,
            pColoredSingleRectangle->rectangleData.COORDS_PER_VERTEX_RECTANGLE, GL_FLOAT, GL_FALSE,
            pColoredSingleRectangle->rectangleData.vertexStride, pColoredSingleRectangle->rectangleData.vertices);
    checkGlError("Rectangle: glVertexAttribPointer");
    // Set the color for drawing the triangles
    glUniform4fv(pColoredSingleRectangle->programColoredRectangle.u_color_location, 1, pColoredSingleRectangle->rectangleData.color_rgba);
    checkGlError("Rectangle: glUniform4fv");
    // Apply the projection and view transformation
    glUniformMatrix4fv(pColoredSingleRectangle->programColoredRectangle.u_mvp_matrix_location, 1, GL_FALSE, pColoredSingleRectangle->rectangleData.mvpMatrix);
    checkGlError("Rectangle: glUniformMatrix4fv");
    // Draw the square
    glDrawElements(GL_TRIANGLES, NELEMS(pColoredSingleRectangle->rectangleData.drawOrder), GL_UNSIGNED_SHORT,
            pColoredSingleRectangle->rectangleData.drawOrder); // GL_INVALID_ENUM on Galaxy Note
    checkGlError("Rectangle: glDrawElements");
    // GL_INVALID_ENUM is generated if mode is not an accepted value,
    // or if type is not GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT.
    GLint error = glGetError();
    if (error == GL_OUT_OF_MEMORY) {
        LOGE("Out of Video memory error...congrats...is your device a stone ?");
    }
    // Disable the vertex array
    glDisableVertexAttribArray(pColoredSingleRectangle->programColoredRectangle.a_position_location);
}
In any benchmark my FPS is below 12, while other devices get 30 or 50 FPS. Believe me, my device does not have the best CPU + GPU, by far.
Some output from old code:
Code:
printGLString("Version", GL_VERSION);
printGLString("Vendor", GL_VENDOR);
printGLString("Renderer", GL_RENDERER);
printGLString("Extensions", GL_EXTENSIONS);
....
void printGLString(const char *name, GLenum s) {
const char *v = (const char *) glGetString(s);
LOGE("GL %s = %s\n", name, v);
}
// ### GT-N8000 ### Galaxy note 10" - GL Renderer = Mali-400 MP
// GL_EXT_debug_marker
// GL_OES_texture_npot
// GL_OES_compressed_ETC1_RGB8_texture
// GL_OES_standard_derivatives
// GL_OES_EGL_image
// GL_OES_depth24
// GL_ARM_rgba8
// GL_ARM_mali_shader_binary
// GL_OES_depth_texture
// GL_OES_packed_depth_stencil
// GL_EXT_texture_format_BGRA8888
// GL_EXT_blend_minmax
// GL_OES_EGL_image_external
// GL_OES_EGL_sync
// GL_OES_rgb8_rgba8
// GL_EXT_multisampled_render_to_texture
// GL_EXT_discard_framebuffer
// ### THL V8S - GL Renderer = PowerVR SGX 544MP
// GL_EXT_debug_marker
// GL_OES_rgb8_rgba8
// GL_OES_depth24
// GL_OES_vertex_half_float
// GL_OES_texture_float
// GL_OES_texture_half_float
// GL_OES_element_index_uint
// GL_OES_mapbuffer
// GL_OES_fragment_precision_high
// GL_OES_compressed_ETC1_RGB8_texture
// GL_OES_EGL_image
// GL_OES_EGL_image_external
// GL_OES_required_internalformat
// GL_OES_depth_texture
// GL_OES_get_program_binary
// GL_OES_packed_depth_stencil
// GL_OES_standard_derivatives
// GL_OES_vertex_array_object
// GL_OES_egl_sync
// GL_OES_texture_npot
// GL_EXT_multi_draw_arrays
// GL_EXT_texture_format_BGRA8888
// GL_EXT_discard_framebuffer
// GL_EXT_shader_texture_lod
// GL_IMG_shader_binary
// GL_IMG_texture_compression_pvrtc
// GL_IMG_texture_compression_pvrtc2
// GL_IMG_texture_npot
// GL_IMG_texture_format_BGRA8888
// GL_IMG_read_format
// GL_IMG_program_binary
// GL_IMG_uniform_buffer_object
// GL_IMG_multisampled_render_to_texture
OK!
I know why it's so slow: you are practically issuing 300 draws per frame, and it doesn't really matter whether each one has 2 triangles or 300 per call.
Setting up a shader 300x per frame is almost the limit for slower (not slow) devices.
Main issues:
1) Never call glGetError when not debugging/running a debug build - it stalls the pipeline (the pipeline stops and waits until all commands are executed).
2) Since you have 100 lines, all using the same program, why not group them into one array?
2 solutions
-----------------
A)
= You are not using vertex buffers, so:
1) Create temporary memory and pre-transform all lines/triangles into one array - it will be much faster.
2) Set up the shader once + set the uniforms.
3) Set up the attributes to point at the start of the temporary array you created. (Every call - believe me, it will be faster. Good practice is to create the array on the stack, but beware of overflow - though for 300 lines that is practically impossible.)
4) Issue one draw call for all lines/triangles at once.
B) An even faster solution (fake instancing) - this is really fast:
= Create a vertex buffer with the data (and this time create it as a real VBO!)
- and not with only 1 instance of the line, but with a reasonable count - for instance 300 (fake instancing): the line data repeated 300x,
- and in tex.x/y (for instance) store the index 0..299 -> x = (float(index % 256) / 256.0f), y = ((index & ~255) / 256.0) - using a "lowp vec2 inTex;" attribute.
= In the shader:
- create a uniform array with the coordinates, for instance "uniform mediump vec3 positions[300];"
- in the vertex part, something like this:
gl_Position.xyz = inPosition.xyz + positions[int((inTex.x * 256.0) + (inTex.y * 256.0))];
Then:
1) Use the shader.
2) Fill the array with positions and set it on the uniform.
3) Set the attributes.
4) Draw - and change the count of elements drawn according to the situation.
If you need to draw more than 300 lines, enlarge the array OR just do more calls - by this time it is getting efficient.
But I suppose the first solution will be more than enough.
Cheers.
P.
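To make solution A concrete, here is a minimal sketch assuming OpenGL ES 2.0 with client-side arrays. The candle fields (x, low, high), CANDLES_COUNT, and the a_position attribute location are illustrative stand-ins for the structures in the code above; the shared program and its uniforms are assumed to be set up once beforehand.
Code:
#include <GLES2/gl2.h>

#define CANDLES_COUNT 100 /* assumed, as in the code above */
extern struct { float x, low, high; } candles[CANDLES_COUNT]; /* illustrative */

/* 2 endpoints per line, (x, y) per endpoint. */
static GLfloat line_verts[CANDLES_COUNT * 2 * 2];

/* Batch all wick lines into one array and issue a single draw call
 * instead of one glUseProgram/glUniform/glDrawArrays per candle. */
static void draw_all_lines_batched(GLint a_position)
{
    for (int i = 0; i < CANDLES_COUNT; i++) {
        line_verts[i * 4 + 0] = candles[i].x;
        line_verts[i * 4 + 1] = candles[i].low;
        line_verts[i * 4 + 2] = candles[i].x;
        line_verts[i * 4 + 3] = candles[i].high;
    }
    /* Program and uniforms were set once, outside this function. */
    glEnableVertexAttribArray(a_position);
    glVertexAttribPointer(a_position, 2, GL_FLOAT, GL_FALSE, 0, line_verts);
    glDrawArrays(GL_LINES, 0, CANDLES_COUNT * 2); /* one call for all lines */
    glDisableVertexAttribArray(a_position);
}
The same idea applies to the rectangles: append all bodies into one vertex/index array and draw them with a single glDrawElements call.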
Petr Sovis said:
OK!
I know why it's so slow: you are practically issuing 300 draws per frame, and it doesn't really matter whether each one has 2 triangles or 300 per call. [...]
Thanks
When I wrote it I thought the glGetError might slow it down a bit.
Since I am coming from an object-oriented development environment, it was designed that way...
The candle (line + triangles) can carry other data too; maybe I need to keep the data elsewhere...
The last candle's line, rectangle, and color can change on almost every draw call: just 1 of the 4 params will not change.
When the last candle is closed, a new one needs to be started.
In that case all the other candles need to shift left. Maybe that is why I am keeping them in separate methods.
I thought the shader setup was done elsewhere; at least that is where I get the pointers to them - on surface changed/created. I am not at my PC now.
Should the text with the price + time scale be rendered as a texture on a rectangle?
I will reply inline:
matheszabi said:
Thanks
When I wrote it I thought the glGetError might slow it down a bit.
Since I am coming from an object-oriented development environment, it was designed that way...
Object-oriented development has nothing to do with this. glGetError is NOT a function to retrieve error status from GL functions; it's for debugging purposes only. It's commonly used as (simple form):
#ifdef _DEBUG
#define CHECKGL(x) x; assert(glGetError() == GL_NO_ERROR)
#else
#define CHECKGL(x) x
#endif
CHECKGL( glUniform4f(....) );
or, still very fast:
#define CHECKGL(x) x; if (globalDebugGL) { assert(glGetError() == GL_NO_ERROR); }
matheszabi said:
Thanks
The candle (line + triangles) can carry other data too; maybe I need to keep the data elsewhere...
The last candle's line, rectangle, and color can change on almost every draw call: just 1 of the 4 params will not change.
When the last candle is closed, a new one needs to be started.
In that case all the other candles need to shift left. Maybe that is why I am keeping them in separate methods.
Again: preparing something in a vertex buffer is 100x faster than calling glUseProgram, glUniform, and then a draw call.
Different values per "candle": you can vary color, position, tex coords, and other values per candle - up to 16 vec4 attributes per vertex in the vertex buffer.
Just for your imagination, glUniform is not fast at all: on many drivers it effectively patches and revalidates the shader state with the new values. So drawing 2 triangles per shader setup (different uniform values each time) is not very smart; you only have around 800-1000 such operations per frame, tops. Newer cards can handle much more. Look up the term "draw call batching" to learn more - it is Unity's main feature.
Just imagine: even the chip in your phone can easily draw 20 textured MTris per second, yet you draw just a few thousand and it's sweating.
There is a very nice article about GLES 2.0 from Apple - google "OpenGL ES Design Guidelines"; most of it is true on Android as well.
matheszabi said:
Thanks
I thought the shader setup was done elsewhere; at least that is where I get the pointers to them - on surface changed/created. I am not at my PC now.
Should the text with the price + time scale be rendered as a texture on a rectangle?
That depends on your situation:
- Is the price/time always the same? (texture)
- Does it change per frame? (prerender a texture)
- Does it change per "candle"? - I posted articles about text in GL apps in the last post.
I think you can choose your way!
Cheers
P.
Thanks for the answer.
Petr Sovis said:
That depends on your situation:
- Is the price/time always the same? (texture)
- Does it change per frame? (prerender a texture)
- Does it change per "candle"? - I posted articles about text in GL apps in the last post.
The price/time is the same until the user presses a button; then the time (From/To values) changes, and collecting the data yields a min price and a max price. So the price scale will probably change on that button event, but not all the time.
Also, the last candle's price is "moving" - at least the close value, but it can move the min or max too, which can be the min or max of the whole chart; in those rare situations the coordinate system needs to change as well. A trick can be used here: when it changes, add +20% so it does not need to change on every tick (frame).
matheszabi said:
Thanks for the answer.
The price/time is the same until the user presses a button; then the time (From/To values) changes, and collecting the data yields a min price and a max price. So the price scale will probably change on that button event, but not all the time.
Also, the last candle's price is "moving" - at least the close value, but it can move the min or max too, which can be the min or max of the whole chart; in those rare situations the coordinate system needs to change as well. A trick can be used here: when it changes, add +20% so it does not need to change on every tick (frame).
I would definitely use one of the text rendering techniques I already posted. It's very fast (when implemented properly) and you can "print" whatever you want.
One of the following:
Bitmap fonts (more content preparation - you need to prepare a texture for each font):
-------------------
font bitmap creator: google: angelcode bmfont
+ sample: google: "nehe 2d_texture_font"
FreeType implementation (easier to use - FreeType can generate the font you want; also very fast):
-------------------------------------
it's possible to compile it with the NDK - I use it and it's simple
google: freetype
and code for starters: google: "nehe freetype_fonts_in_opengl"
Cheers
P.

[BIP][ULTIMATE GUIDE]Custom watchface, Music controls, Calls, Tasker integration, etc

Hi all! The Amazfit Bip is an amazing device with great hardware and battery life that would even put a Pebble to shame. However, due to its limited software, it has a lot of unrealized potential and this thread aims to provide some workarounds for the power users of XDA to make this amazing smartwatch more functional.
I will be taking requests for additional tutorials, so please feel free to comment!
Note: Most of the guides here require the use of two paid apps, 'Tasker' and 'Tools & Amazfit'. Both are awesome apps (especially Tasker) and they do not cost much to download.
Tasker: https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en
Tools & Amazfit: https://play.google.com/store/apps/details?id=cz.zdenekhorak.amazfittools&hl=en
With this, we can start now!
1. Custom watch face tutorial
(Yes I am a big fan of Pebble's sliding text watch face, you can download this watchface here.)
This tutorial does not require the use of either Tasker or Tools & Amazfit and thus can be done for free.
1. In order to start to create a custom watch face, first you need a watch face template.
2. Connect your BIP to your android phone via MiFit app.
3. Open MiFit, go to Profile->Amazfit Bip->Watch face settings.
4. Click on the ugliest watch face (in your opinion; I used the 'Number' watch face) and press 'Sync watch face'.
5. The watch face should be synced to your BIP and downloaded to your phone.
6. Connect your Android phone to your PC and navigate to 'Android/data/com.xiaomi.hm.health/files/watch_skin' and transfer the .bin file to your working directory on your PC.
(For my case, the .bin file is named '9098940e097cf25a971fc917630a6ac2.bin'. That is the name of the 'Number' watch face. Your .bin file may have a different name and that is ok)
7. Download Amazfit Tools to your PC from this link: https://bitbucket.org/valeronm/amazfitbiptools/downloads/
8. Extract the downloaded file to a folder, then drag and drop the transferred .bin file onto 'WatchFace.exe'. A cmd window should appear and disappear in a flash.
9. A folder with the name of your .bin file should now appear in the folder where your .bin file is stored. (In my case, the folder is named '9098940e097cf25a971fc917630a6ac2')
10. Open the folder that appeared; inside, you can see the components of a BIP watch face.
11. The folder contains 2 main file types, .png and .json. (Delete the .log file and the 2 preview files.)
12. The .png files are resources for the watch face (such as backgrounds, numbers etc) while the .json file contains code for the picture placements and watch face functions.
13. Now, you can either make small edits to the preexisting .png files with a photo editor (paint/gimp/photoshop) and then recompile to a .bin file
or
14. You can replace all the .png files with your own images
(I will be going through this option in more detail. If you chose the small-edits path, after editing the resources you can skip straight to recompiling the .bin, which is step 28.)
15. If you choose to replace all the .png files with your own images, start creating your .png files now. Below are the requirements for the .png files
The screen size of the BIP is 176x176px, so do size your images accordingly
You must have at least 11 images, with 000.png being the background, 001.png being the number 0, 002.png being the number 1, and so on until 010.png, which is the number 9.
If you want a separate set of images for minutes and hours, that is possible too. Your next set of images should start with 011.png being the number 0.
16. Follow all the requirements and your folder will end up looking like this:
17. It is possible to edit the .json by hand but it is easier to do so with a GUI tool.
18. Go to https://v1ack.github.io/watchfaceEditor/ and upload your custom images and the .json file.
19. You may get a few error messages but just ignore them.
20. Go to the edit tab and replace the contents inside with this:
Code:
{
"Background": {
"Image": {
"X": 0,
"Y": 0,
"ImageIndex": 0
}
},
"Time": {
"Hours": {
"Tens": {
"X": 7,
"Y": 9,
"ImageIndex": 1,
"ImagesCount": 10
},
"Ones": {
"X": 36,
"Y": 9,
"ImageIndex": 1,
"ImagesCount": 10
}
},
"Minutes": {
"Tens": {
"X": 81,
"Y": 9,
"ImageIndex": 1,
"ImagesCount": 10
},
"Ones": {
"X": 110,
"Y": 9,
"ImageIndex": 1,
"ImagesCount": 10
}
}
}
}
21. This is the most basic .json watch face code, giving you only the time in hours and minutes. The code explanation is below:
Code:
{
"Background": {
"Image": {
"X": 0,
"Y": 0,
"ImageIndex": 0 \\ use 000.png
}
},
"Time": {
"Hours": {
"Tens": {
"X": 7, // position on the XY axis
"Y": 9,
// To use images 001.png to 010.png, use ImageIndex 1;
// a second digit set starting at 011.png would use ImageIndex 11 (the index is the number of the first image in the run)
"ImageIndex": 1,
"ImagesCount": 10 // will go from image 001.png to image 010.png
},
"Ones": {
"X": 36, // position on the XY axis
"Y": 9,
// To use images 001.png to 010.png, use ImageIndex 1;
// a second digit set starting at 011.png would use ImageIndex 11 (the index is the number of the first image in the run)
"ImageIndex": 1,
"ImagesCount": 10 // will go from image 001.png to image 010.png
}
},
"Minutes": {
"Tens": {
"X": 81, // position on the XY axis
"Y": 9,
// To use images 001.png to 010.png, use ImageIndex 1;
// a second digit set starting at 011.png would use ImageIndex 11 (the index is the number of the first image in the run)
"ImageIndex": 1,
"ImagesCount": 10 // will go from image 001.png to image 010.png
},
"Ones": {
"X": 110, // position on the XY axis
"Y": 9,
// To use images 001.png to 010.png, use ImageIndex 1;
// a second digit set starting at 011.png would use ImageIndex 11 (the index is the number of the first image in the run)
"ImageIndex": 1,
"ImagesCount": 10 // will go from image 001.png to image 010.png
}
}
}
}
22. In the edit tab, there are a few options for you to play around with in the sidebar. Do experiment! You can also go to the design tab and start moving the numbers around according to your own needs.
23. There are many more watch face functions supported by the BIP and I recommend visiting this site (Google translated) to learn more about them; a small example follows below. The site is amazing, with numerous watch-face-related tutorials, but it is all in Spanish.
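For instance, a seconds display (if you choose to add one) is just the same Tens/Ones pattern again, placed alongside "Hours" and "Minutes" inside the "Time" block. This is a sketch only: the X/Y positions are arbitrary, and it assumes you added a second digit set starting at 011.png.
Code:
"Seconds": {
"Tens": {
"X": 130,
"Y": 45,
"ImageIndex": 11,
"ImagesCount": 10
},
"Ones": {
"X": 150,
"Y": 45,
"ImageIndex": 11,
"ImagesCount": 10
}
}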
24. When satisfied with your watch face, press export JSON and download the .json file to your decompiled .bin file folder. Replace the existing .json file. (For me, the folder is the '9098940e097cf25a971fc917630a6ac2' folder)
25. Download this zip file: https://github.com/v1ack/watchfaceEditor/raw/master/defaultimages/defaultimages.zip
26. Extract all the files from the downloaded archive into your decompiled .bin file folder. These images are the ones used by the online watch face editor when you add functions such as date display, weather, steps, etc.
27. Your decompiled .bin folder should look like this now:
28. Drag and drop the .json file onto 'WatchFace.exe'. '9098940e097cf25a971fc917630a6ac2_packed.bin' should appear in the folder (your file name will be different), alongside a few previews of your watch face.
29. Rename your packed .bin file by removing the '_packed' suffix. (In my case, the file is renamed from 9098940e097cf25a971fc917630a6ac2_packed.bin to 9098940e097cf25a971fc917630a6ac2.bin)
30. Transfer the newly created .bin file to your Android phone, into this folder: 'Android/data/com.xiaomi.hm.health/files/watch_skin' . Replace the old watch face .bin.
31. Open the MiFit app and sync the watch face you synced at step 4 onto your BIP.
32. Voila! You have a new custom watch face.
33. If you get any errors at step 28 (recompiling), check that all the images you referenced in your .json are in the folder. Also, go back to the watch face editor website and check your .json file for errors. You can comment below if there is anything you are unsure of or need clarified.
2. Tasker Integration
Tasker is a very powerful Android app, and when integrated with the Amazfit BIP, the number of things that can be done with the watch is ENDLESS!
The video below shows how I use my BIP as a TV remote control
Two clicks of the BIP's button turned off the television.
To learn more about Tasker integration with the BIP, expand the content below:
To get Tasker integration for the BIP, you need to download both Tasker and Tools & Amazfit to your Android.
Both are paid apps!!!
This tutorial assumes the knowledge of variables and flow control in Tasker.
If you have never used Tasker before, I recommend you start learning by following some of the tasks listed here. I believe that the best way to learn is a hands on approach and thus after completing a few of the Beginner tasks, move on to the Intermediate tasks and you should be well-versed enough in Tasker for the purposes of this tutorial.
Before starting this tutorial, make sure your BIP is connected to both the MiFit and Tools & Amazfit apps.
1. The Tools & Amazfit app has both Tasker Event plugins and Action plugins.
2. Event plugins will trigger an action in Tasker. (e.g. button is pressed or heart rate is measured)
3. These plugins also provide variables to Tasker (e.g. the event plugin will tell you how many times the button on the Amazfit was pressed).
4. Action plugins let Tasker do something on the BIP. (e.g. you can send custom vibration to your BIP or you can change heart rate monitoring mode, vibration notification mode, etc.)
5. These plugins can also take variables from Tasker (e.g. you can pass your own custom text to display on the BIP).
Events:
To set up an event plugin in Tasker, on the Profiles screen tap + and select Event - Plugin - Amazfit Tools. Then simply tap configure and choose the event you want to be notified about in Tasker.
1. Tools & Amazfit supports 3 Tasker events (as of 9th Feb 2018), as shown in the bottom-right image above.
2. Each of these events has variables for finer event control. The %button_pressed and %heart_rate variables accept any positive integer, while %charging_state is either 0 or 1: 0 for not charging and 1 for charging.
Actions:
To set up an action plugin in Tasker, on the Tasks screen create a new task, then tap + and select Plugin - Amazfit Tools. Then simply tap configure and choose the action you want to perform in Amazfit Tools.
1. Tools & Amazfit supports 5 Tasker actions (as of 9th Feb 2018), as shown in the right-side image above.
2. These actions are self-explanatory.
Project:
Using both Tools & Amazfit events and actions, now we can create a Tasker profile to turn off the TV!
1. The Event tab and Task:
2. The event is quite straightforward: it just tells Tasker to watch for presses of the BIP's button.
3. Task explanation:
Line 1: If the variable %button_count equals 2 (i.e. the BIP's button was pressed twice), run lines 2 to 6. Otherwise, do nothing.
Line 2: Run the command 'input keyevent 26'. This command simulates a power button press (requires root) and thus wakes the screen.
Line 3: Launch my remote control app.
Line 4: Run the command 'input tap 225 270'. This command simulates a screen touch event at pixel X: 225 Y: 270, which is the location of the power button of my remote control app. (Requires root)
Line 5: Simulates a power button press to lock the phone again.
Line 6: Sends the notification 'TV TURNED OFF' to the BIP.
End product: after pressing the BIP's button twice, the phone screen turns on, the remote control app opens, its power button is tapped, and the screen locks again.
I understand that turning on your phone's screen and simulating a tap may not be the most efficient way to build a Tasker profile. However, this is the only remote control app that works for my TV, and it has no Tasker integration, so I have to resort to such means. There are other remote control apps that include Tasker integration, and some can even control a smart home, allowing your BIP to control everything from light switches to smart TVs. Whatever Tasker can do, your BIP can trigger, so the possibilities are endless. Have fun!
3. Music Controls
When I bought the BIP, I was disappointed that it had no built-in music control functions as that was one of the main uses of my Pebble watch. Below, I will be teaching you all how to control music with the BIP.
To allow music control with your BIP, you need to download Tools & Amazfit to your Android.
This is a paid app!!!
Make sure that your BIP is connected to both MiFit and the Tools & Amazfit app before starting on this tutorial.
The Tools & Amazfit app allows you to use the BIP's button presses and sensors to control your phone's functions.
Refer to the app developer's website for a tutorial on how to use the button control function of the app: http://help.amazfittools.com/knowledge_base/topics/button-control-amazfit
Due to the BIP having only 1 button, it may be cumbersome to launch a task when there are already too many preset tasks (i.e. it is not convenient to press a button 10 times just to pause music). Thus, the app developer included the ability to have different button profiles. Find out more here: http://help.amazfittools.com/knowledge_base/topics/button-control-switch-profile-amazfit
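For context, play/pause/next/previous controls like these are, in general, delivered as Android's standard media key events. A minimal sketch of that mechanism (a hypothetical helper, not part of Tools & Amazfit):
Code:
import android.content.Context
import android.media.AudioManager
import android.view.KeyEvent

// Fires a standard media key event (down + up), which whatever music
// player currently holds the media session should respond to.
fun Context.sendMediaKey(keyCode: Int) {
    val audio = getSystemService(Context.AUDIO_SERVICE) as AudioManager
    audio.dispatchMediaKeyEvent(KeyEvent(KeyEvent.ACTION_DOWN, keyCode))
    audio.dispatchMediaKeyEvent(KeyEvent(KeyEvent.ACTION_UP, keyCode))
}

// Usage, e.g. from an app context triggered by Tasker:
// sendMediaKey(KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE)
// sendMediaKey(KeyEvent.KEYCODE_MEDIA_NEXT)
// sendMediaKey(KeyEvent.KEYCODE_MEDIA_PREVIOUS)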
4. Picking Up Calls
The BIP has the ability to end calls but is unable to pick up calls. In this tutorial, we will be learning 2 ways to pick up calls with the BIP.
The first method requires you to download Tools & Amazfit to your Android.
The second method requires you to download both Tasker and Tools & Amazfit to your Android and it also needs root access on your Android.
Both are paid apps!!!
Method 1:
1. The Tools & Amazfit app supports picking up incoming calls.
2. Make sure that your BIP is connected to both the MiFit and Tools & Amazfit apps before continuing.
3. Open Tools & Amazfit app, open sidebar (swipe from left) and press contacts.
4. Press the plus button and choose 'Any Other'.
5. Tap on the Any Other option that now appears at the contacts screen and tap on 'Amazfit Button Actions'.
6. Set 'Push Button' to 'Answer'
7. In the meantime, you can also customize the incoming call notifications using the Tools & Amazfit app. Have fun!
8. However, this method does not work on recent versions of Android, so I will be introducing the 2nd method below.
Method 2:
Video tutorial:
1. Make sure that your BIP is connected to both the MiFit and Tools & Amazfit apps before continuing. This method also requires Tasker and root.
2. Create new Tasker task called 'Pickup'
3. Add an action and tap Code->Run Shell.
4. Type 'service call phone 6' in the command box and check 'Use Root'
5. Press back, add another action, tap Task->Stop, and type 'Pickup' in the box.
6. Create a new profile, tap on State->Phone->Call and choose incoming.
7. Add 'Pickup' as the task to the profile.
8. Add an Event to that profile. In the Event Category window, tap on Plugin->Amazfit Tools and configure as Button Pressed.
9. Now, when there is an incoming call and you press the BIP's button, you will be able to pick up the call!
5. Coming soon: Google Maps Navigation
"Coming soon: Google Maps Navigation"
I'll be looking forward to this.
Great work thanks! Ordered today and looking forward to playing with this watch. Life after pebble!!
Wow, thank you. Great guide!
Making music control with Tasker is a good idea, but I need to see the song name on the Bip's screen. Play/pause, next song and previous song buttons would be enough for me. I hope the Amazfit Bip developers add a music control feature as soon as possible.
Superb - thank you! Does the Tools & Amazfit app add Strava sync compatibility?
Deleted
JamesRC said:
Superb - thank you! Does the Tools & Amazfit app add Strava sync compatibility?
Click to expand...
Click to collapse
No it does not, sorry
Kefelon said:
Wow, thank you. Great guide!
Making music control with Tasker is a good idea, but I need to see the song name on the Bip's screen. Play/pause, next song and previous song buttons would be enough for me. I hope the Amazfit Bip developers add a music control feature as soon as possible.
Click to expand...
Click to collapse
It is possible to create a notification on the watch whenever the song changes!
nelsontky said:
No it does not, sorry
Click to expand...
Click to collapse
Actually, Notify and Fitness can sync your activities with Strava or Runkeeper (or create a raw .tcx file), but you have to buy the third-party export IAP, which is separate from the Pro one.
Matchstick said:
Actually, Notify and Fitness can sync your activities with Strava or Runkeeper (or create a raw .tcx file), but you have to buy the third-party export IAP, which is separate from the Pro one.
Click to expand...
Click to collapse
I may be able to figure out a way to do that without these apps
Can you please help me use the Amazfit BIP to display prayer timings and the Qibla direction, just like Al Fajr watches? I will be grateful.
I want the watch to work independently and display the prayer timings for any given location. Your support in this regard will be highly appreciated.
Thank you for the post. Do you know when you will have time for the Google Maps Navigation guide? I am still undecided about whether I want to buy a BIP.
m0ritz said:
Thank you for the post. Do you know when you will have time for the Google Maps Navigation guide? I am still undecided about whether I want to buy a BIP.
Click to expand...
Click to collapse
I am currently having some difficulties with Google Maps navigation, as they changed their direction instructions to pictures (and I cannot grab all the possible images). I kind of have it ready for OSMAnd; however, OSMAnd does not work as well as Google Maps.
awankufri said:
Can you please help me use the Amazfit BIP to display prayer timings and the Qibla direction, just like Al Fajr watches? I will be grateful.
I want the watch to work independently and display the prayer timings for any given location. Your support in this regard will be highly appreciated.
Click to expand...
Click to collapse
It is not possible to display such information on the watch face directly. It may be possible to create a notification for these, though, and you would be able to view the information on your watch with a button press (or a series of button presses) while connected to your phone. If you are fine with such limitations, do reply and we can work something out together and add it to this thread.
That would be awesome. Thanks a lot for taking an interest in my request. I really appreciate it. If this info can be displayed on the watch without using a smartphone...
awankufri said:
That would be awesome. Thanks a lot for taking an interest in my request. I really appreciate it. If this info can be displayed on the watch without using a smartphone...
Click to expand...
Click to collapse
Without a smartphone it would be impossible, as (if I am not wrong) the prayer times change every day, right? There is no way to import a timetable to the BIP due to its limited functionality. As for the Qibla direction, the BIP actually has the hardware to calculate it, but since it is a closed system, it is impossible to create an app on the watch to do so. You can technically find the direction from your coordinates and the Qibla's, but that is too tedious to do by hand; a rough sketch of the math is below.
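For the curious, the standard initial great-circle bearing formula gives the Qibla direction from any location. This is a sketch only, not tested on a device; the Kaaba coordinates are approximate and the function name is mine:
Code:
import kotlin.math.atan2
import kotlin.math.cos
import kotlin.math.sin

// Initial great-circle bearing from (latDeg, lonDeg) to the Kaaba,
// returned as a 0..360 compass bearing in degrees.
fun qiblaBearingDegrees(latDeg: Double, lonDeg: Double): Double {
    val kaabaLat = Math.toRadians(21.4225) // approx. latitude of the Kaaba
    val kaabaLon = Math.toRadians(39.8262) // approx. longitude of the Kaaba
    val lat = Math.toRadians(latDeg)
    val dLon = kaabaLon - Math.toRadians(lonDeg)
    val y = sin(dLon) * cos(kaabaLat)
    val x = cos(lat) * sin(kaabaLat) - sin(lat) * cos(kaabaLat) * cos(dLon)
    // atan2 yields -180..180 degrees; normalize to a compass bearing
    return (Math.toDegrees(atan2(y, x)) + 360.0) % 360.0
}
For example, qiblaBearingDegrees(40.71, -74.01) for New York comes out to roughly 58 degrees (north-east), which matches the commonly quoted value.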
Thanks a lot for the reply, brother. We (Muslims) pray five times a day, and the timings are based on 2 parameters:
1. the movement of the Sun, and
2. the location.
For example, the prayer times for New York will be different from those for Dubai. For reference, check this app:
https://play.google.com/store/apps/details?id=com.haz.prayer&hl=en
Even if we can only display such info with the button, it would be great.
nelsontky said:
I am currently having some difficulties with Google Maps navigation, as they changed their direction instructions to pictures (and I cannot grab all the possible images). I kind of have it ready for OSMAnd; however, OSMAnd does not work as well as Google Maps.
Click to expand...
Click to collapse
Hi,
a good guide for OSMAnd would be helpful too, so it would be very nice if you posted it as well.
Cheers
Is it possible to set up workouts or pace notifications for running? Like have it tell you to run for 0.3 miles then rest for 30 seconds, then run again, etc. Is it possible for it to remind you if you're running below a certain pace, e.g. buzz if you go above a 7:30 min/mile pace?
Hi. So I am looking for customization. However, I use an iPhone and have a spare Android phone. If I buy the all-amazing Tools & Amazfit app, can I leave my Amazfit Bip disconnected from the phone for most of the day (including night time) and still expect to get all the data and features of the app when I connect to the app once a day?
Also, please tell me if there is a way to track a gym routine on the Amazfit, like resistance and strength training.

I need help with my wallpaper app in kotlin

I am making a wallpaper app, which I was testing on my phone running Android 9 (API 28). My app has 2 buttons: one to set the home screen wallpaper and one to set the lock screen wallpaper. The home wallpaper button works just fine, but the lock screen one does not. At first the IDE showed an error on the call; hovering over it said the feature requires API 24 and above, and it offered to surround the call with an if check. I accepted, the error went away, but it still was not working on my phone: no logcat errors, no crashes. I tried it on my brother's phone (Android 10, API 29) and it worked fine. It is meant for API 24 and above, so why is it not working on API 28 but working on API 29? What should I change in the code to make it work?
What I have tried:
I tried the code below for the lock screen wallpaper. fl_iv in the code is a FrameLayout containing an ImageView that displays the wallpaper image.
Code:
import android.app.WallpaperManager
import android.graphics.Bitmap
import android.os.Build
import androidx.core.view.drawToBitmap
import java.io.IOException

// Render the preview layout to a bitmap (drawToBitmap is from androidx.core)
val result: Bitmap = fl_iv.drawToBitmap()
val wallpaperManager = WallpaperManager.getInstance(this)
try {
    // The FLAG_LOCK overload of setBitmap is only available on API 24+
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        wallpaperManager.setBitmap(result, null, true, WallpaperManager.FLAG_LOCK)
    }
} catch (ex: IOException) {
    ex.printStackTrace()
}
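One thing worth checking next (just a guess on my part, since there is no logcat output): some devices or policies silently refuse wallpaper changes, and the 4-argument setBitmap() returns 0 when it fails. So logging the return value and the WallpaperManager capability getters might show what is happening:
Code:
import android.util.Log

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    // Both getters exist (API 23/24+) and catch silent policy blocks
    if (!wallpaperManager.isWallpaperSupported || !wallpaperManager.isSetWallpaperAllowed) {
        Log.w("Wallpaper", "Wallpaper changes are blocked on this device")
    } else {
        // setBitmap returns the new wallpaper ID, or 0 if the call failed
        val id = wallpaperManager.setBitmap(result, null, true, WallpaperManager.FLAG_LOCK)
        if (id == 0) Log.w("Wallpaper", "setBitmap failed for FLAG_LOCK")
    }
}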
