I entered the world of smartphones yesterday. I took the plunge and bought a slightly used Android LG G2x. Why used? I wanted the option of switching cell phone providers when my contract is up with TMO. I bought it off Craigslist, a little wary about whether it would completely work or not. I also wanted to check the phone for the recent bad batch of G2x units now emerging with screen problems. I ended up getting the phone, checked it over, and made sure to factory reset the device.
After restarting I found out that the wireless was not working! Umm… now what!?! I opened the browser and went to a bunch of sites. Nothing worked! Crap… it’s hardware. Then I randomly went to my router (192.168.1.1). Strange, it worked! What the!?! So wireless works going to a direct IP address but not to google.com. Now it was time to google (using my computer this time), and I came across this thread, which had an answer for me:
- Factory Reset
- Skip initial setup
- Turn on airplane mode (hold down Power button for 3 sec)
- Turn on Wifi
- Setup Google Account/Market
- Shutdown phone/Reboot (don’t know if you need this)
Bingo! The phone has been working great ever since!
UPDATE: That didn’t work completely. The problem came back. I ended up removing the “My Account” app, and now it actually has been working all week! Finally… now just to wait for the Android 2.3.3 Gingerbread update!
This phone has a great 8 MP camera, and I wanted to compare it against my Exilim.
Let’s compare the pictures:
G2x (Left) and the Exilim (Right)
Interesting. This is a zoomed-in picture on both the G2x and the Exilim. The 7.2 MP Exilim point-and-shoot does better at brightness. This is just my first pic on the G2x, so there may be some settings that could be tweaked to clean the picture up as well. Notice, though, that you can almost read the words with the G2x, whereas not so much with the Exilim.
Now on to rooting the phone. Maybe I can remove some of these TMO pre-installed apps. They locked it down so you can’t uninstall them without superuser access.
The exchange of information is greater today than ever before in our history. This means that the threat of malicious actors misusing these standard channels of communication to send stolen information or plans against a nation is more likely now than ever (Goel, Garuba, Liu, & Nguyen, 2007). After the attacks of 9/11 it was believed that terrorists were hiding information in pictures using steganography. The goal of this research paper is to discuss ways in which steganography can be detected.
To understand methods of detecting steganography, it is necessary to define what steganography is and how it works. Steganography is a general term for a process of hiding information. The word steganography has its origin in the Greek language, meaning “covered writing” (Johnson & Jajodia, 1998). Most commonly, steganography is used to hide messages.
Steganographic data is most often hidden within a picture file. The message can be stored as a picture hidden within the main picture, or in the low-order bits of the data that make up the picture file. Other common file types utilized by steganography include document files and audio files; these are more difficult to create than a steganographic picture file, but they offer increased protection against detection (Artz, 2001).
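The low-order-bit idea above can be sketched in a few lines. This is a minimal illustration of least-significant-bit (LSB) hiding over raw pixel bytes, not any particular tool's method; the function names and the convention of one message bit per byte are my own assumptions, and real implementations vary in framing and which bits or channels they use.

```python
def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Store each bit of `message` in the lowest bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return out


def reveal(pixels: bytearray, length: int) -> bytes:
    """Reassemble `length` message bytes from the low bits of the pixel data."""
    message = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        message.append(byte)
    return bytes(message)
```

Note that each cover byte changes by at most one, which is why the result looks unchanged to the eye; detection techniques instead look for the statistical disturbance this causes in the pixel-value distribution.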
Like many security applications and tools, steganography can be used for a variety of reasons. Good purposes include watermarking images for copyright protection. Digital watermarks (also known as fingerprinting), which are especially important for copyrighted material, are similar to steganography in that the watermark is overlaid on the original file, appears to be part of it, and is thus not easily detectable by the average person. Another use would be to tag notes to online images.
Read entire paper @ google docs.
This would be me, if I was at Google I/O
I was able to attend the Google I/O Extended event in downtown St. Paul. I signed up right away for http://music.google.com… just waiting for an acceptance to my invitation request… anytime now!
This is really cool. I have moved this blog and the dozen or so other websites I maintain to this new server. I am pretty excited about the initial results. Though I shouldn’t be surprised by any means. It is just exciting to see the difference.
The old server and slow internet:
On stella it got the worst rating, as it should.
Now for the performance of the Amazon EC2 Micro instance:
Talk about a huge difference in time. It is really the performance of what the website should be.
So far the cloud and the Amazon EC2 setup have not been bad at all. I configured an Ubuntu 10.10 Maverick x64 instance.
The documentation is pretty good. It was a bit difficult figuring out which Ubuntu cloud image I should choose. I ended up finding the following page useful:
https://help.ubuntu.com/community/EC2StartersGuide#Getting the images
And then to figure out how to log in using PuTTY on a Windows box, since you can’t use the normal username/password combination:
All in all – the information is available once you find it.
I have obviously heard all the hype about cloud infrastructures. I never had a need to jump in and try it, though. I have been happy with my server machine that happily runs 24/7 on Ubuntu. I have had it running over a full year without having to restart even once. Then I made the mistake of looking at the performance of the server. Not a good idea. I quickly realized that some of the websites I was hosting needed to be on a more production-like server. Now where do I turn? Maybe the cloud was my answer?
I really enjoy controlling the server, so I immediately ruled out Google’s app cloud. That also ruled out the old $7/month domain hosting sites; though some do allow ssh access, they do not allow you to do upgrades on the server. That left me with only a select few. There may be others out there, but it came down to Slicehost and Amazon EC2. Since Amazon had the free year, and I have heard so many good things about it, it ended up being an easy choice. The only question is how much I will pay after the free year is up. I might end up going back to my old server if the cost is too high. Time will tell.
I personally went with the Ubuntu Maverick x64 install. Here are the details – https://help.ubuntu.com/community/EC2StartersGuide
Here is some help for connecting with PuTTY for the first time:
I hadn’t been back to the single-line command input in Firebug for a long time, but over the weekend I noticed it had autocomplete. Chrome has had this for a number of releases, but I thought the way Firebug implemented it was very useful!
At CodeFreeze this year, I went to the User Experience (UX) session. They went through things that I have picked up from our UX team, but it ended up providing valuable reminders.
One such idea is that on our team we can do user experience testing during the requirements, prototype, and final stages of development. Personas are necessary, of course, during all stages of development. It is key to understand your customer and how they end up using the system.
Here is a list of things to think about and general tips when going to a customer site to do user testing:
- Look at the user’s cube, cheat sheets, etc.
- Users will often say one thing and do another
- Record their interactions with others
- How often do they use the software? all day? occasionally?
- Who conducts the test? Business as well as UX team members. Business people and developers can also be great observers.
- Take a story and go through it as if you were this person.
- Get context around who the person is, what they are doing, etc.
- Quote: “Take a user to the edge of the cliff and then watch them step off. Let the user struggle a bit and then come back and ask questions later.”
- Question you can ask when the user is struggling: Do you see anything else that would help you?
At this point, it sounds like IndexedDB will be more widely adopted than WebSQL.
Mark West did a nice overview. I summarized his comparison slides below:
WebSQL:
• A real, relational db implementation on the client (SQLite)
• Data can be highly structured, and JOIN enables quick, ad-hoc access
• Big conceptual overhead (SQL)
IndexedDB:
• Sits between full-on SQL and unstructured key-value pairs in “localStorage”
• Asynchronous, with moderately granular locking
• Joining normalized data is a completely manual process
Code Comparisons: http://hacks.mozilla.org/2010/06/comparing-indexeddb-and-webdatabase/.
When I ran mysqld I would get:
> 090127 10:00:30 InnoDB: Operating system error number 13 in a file
> InnoDB: The error means mysqld does not have the access rights to
> InnoDB: the directory.
> InnoDB: File name ./ibdata1
> InnoDB: File operation call: ‘open’.
> InnoDB: Cannot continue operation.
Finally I looked in /etc/mysql/my.cnf and figured out that my bind-address needed to be updated. It took about an hour to figure that one out. It would be best if I could just leave the bind-address as localhost, but I cannot do that right now.
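For reference, the setting lives in the `[mysqld]` section of /etc/mysql/my.cnf; the address below is just a placeholder, not my server's actual IP:

```ini
[mysqld]
# Default is 127.0.0.1 (localhost only); binding to another address
# lets remote clients connect. Replace with your server's address.
bind-address = 0.0.0.0
```

A restart of mysqld is needed for the change to take effect.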
For others where it is not the bind-address, it could be a number of other things. I started reading through this thread: http://lists.mysql.com/mysql/216042. It was very helpful for thinking through the scenarios.
A colleague and I have been trying to figure out how to modify the environment variables programmatically using Groovy in Hudson. After a little tinkering we were able to do it.
hudson.save() //This is needed in order to persist the change
Then if you want to expand this for the slaves:
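A rough sketch of how that could look in the same Groovy console, looping over the slaves and attaching an EnvironmentVariablesNodeProperty to each; the variable name and value here are placeholders, and the exact API may differ between Hudson versions:

```groovy
import hudson.model.Hudson
import hudson.slaves.EnvironmentVariablesNodeProperty

def hudson = Hudson.instance
hudson.slaves.each { slave ->
    // Placeholder variable/value -- substitute your own
    def entry = new EnvironmentVariablesNodeProperty.Entry("MY_VAR", "my_value")
    slave.nodeProperties.add(new EnvironmentVariablesNodeProperty(entry))
}
hudson.save() // persist the change, same as on the master
```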