Moving this blog to “”

June 27, 2010 1 comment

This blog has moved to a new address:

Two new posts are available already:

See you there!

Categories: Web

Google Technology User Group in Tel Aviv

May 18, 2010 1 comment

GTUG TLV took place today in Google's Tel-Aviv offices. I've visited the Haifa and New York offices in the past but had never been to the Tel-Aviv one, so it was nice to see it for the first time, of course.

It was somewhat surprising, but the office is not as fancy as people may think. I mean, it's totally fine and you get some nice walls painted in Google colors. I couldn't see the whole office, so who knows what I missed, but from what I saw it was just a normal office. The kitchen is OK, there are two pool tables next to it, and a medium-size conference room where the event actually took place. I was told it's a temporary location and Google Tel-Aviv will move into its own campus at Tel-Aviv University, which may explain the lack of the usual luxury you get to see in other offices.

But the bird's-eye view of Tel-Aviv is truly fascinating:

Anyway, it wasn’t the office we came to see today, of course.

I stayed for three sessions:

Unfortunately, I left early, thinking I was only missing the Python part (which I'm less interested in), and forgot there was "An introduction to Google Closure" (video) which I really wanted to hear. Shame on me!


The following JavaScript patterns were nicely presented by Zohar Arad:


I don’t think there’s a need to elaborate on it 🙂 It’s just Something.getInstance(), as usual.
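For reference, a minimal sketch of what a `Something.getInstance()` singleton might look like (the `Registry` name and its methods are my own illustration, not code from the talk):

```javascript
// Lazy singleton: the instance is created on first access and cached in a
// closure, so every caller gets the same object back.
var Registry = (function () {
    var instance = null;

    function createInstance() {
        return {
            items: {},
            put: function (k, v) { this.items[k] = v; }
        };
    }

    return {
        getInstance: function () {
            if (instance === null) { instance = createInstance(); }
            return instance;
        }
    };
}());

// Registry.getInstance() always returns the very same object.
```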


A re-usable, self-contained unit that can be instantiated and used for performing certain tasks.

var Module   = function( o ) { .. }; // A general function for converting object to module
var MyModule = new Module({ .. });   // Specifying module's definition
var m        = new MyModule();       // Instantiating the module, start using "m"
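Filling in the `..` placeholders, here is a hedged sketch of how such a `Module` converter could work (a toy implementation of my own, not the one presented):

```javascript
// Toy Module factory: takes a definition object and returns a constructor
// whose instances expose the definition's methods. An optional 'init'
// function runs on instantiation.
var Module = function (definition) {
    var Ctor = function () {
        if (typeof definition.init === 'function') { definition.init.call(this); }
    };
    for (var name in definition) {
        if (definition.hasOwnProperty(name) && name !== 'init') {
            Ctor.prototype[name] = definition[name];
        }
    }
    return Ctor;  // 'new Module(..)' yields this constructor
};

var MyModule = new Module({
    init: function () { this.count = 0; },
    tick: function () { return ++this.count; }
});

var m = new MyModule();  // each instance carries its own 'count'
```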


A callable unit that implements unique internal logic while exposing a uniform API to consumers, similar to a Facade. It is mostly used for functionality that various browsers implement differently, like the Web Sockets example that was given: we implement an IE engine, a Firefox engine, a Chrome engine, and an Opera engine, each overriding certain methods.
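A hedged sketch of the idea (the engine names and capability checks are illustrative, not from the talk):

```javascript
// Adapter/engine sketch: every engine exposes the same API; a factory
// returns the first engine whose isSupported() check passes here.
var NativeSocketEngine = {
    isSupported: function () { return typeof WebSocket !== 'undefined'; },
    connect:     function (url) { return new WebSocket(url); }
};

var FallbackPollingEngine = {
    isSupported: function () { return true; },  // always available
    connect:     function (url) { return { url: url, polling: true }; }
};

function chooseEngine(engines) {
    for (var i = 0; i < engines.length; i++) {
        if (engines[i].isSupported()) { return engines[i]; }
    }
    throw new Error('no engine available');
}

// Consumers only ever see the uniform connect() API:
var engine = chooseEngine([NativeSocketEngine, FallbackPollingEngine]);
```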


Instead of calling some other unit directly, we post DOM events with commonly known names, to which the other party responds via a registered listener. This is used heavily in Flash-to-JavaScript communication, freeing Flash from needing to know JavaScript method names and from being recompiled each time the JS code is refactored. Putting a layer of isolation between them loosens the coupling a bit: there are no direct calls any more, one technology just fires events for the other. All major JS frameworks support custom event names like 'SomethingHappenedEvent', and here we assume event names won't change as frequently as method names.

There was a comment from the audience about the importance of detaching event listeners on unload to clean up the references and prevent memory leaks, which can otherwise be catastrophic, especially in Flash. Ideally, this kind of "unregistering" is done automatically.
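A minimal sketch of the event-name idea, including the "unregistering" the audience comment called for (a toy dispatcher of my own; DOM or framework event APIs would follow the same shape):

```javascript
// Tiny event bus: publishers fire named events, subscribers register
// listeners by name, and removeListener() releases the reference so the
// listener (and whatever it closes over) can be garbage-collected.
var EventBus = {
    listeners: {},
    addListener: function (name, fn) {
        (this.listeners[name] = this.listeners[name] || []).push(fn);
    },
    removeListener: function (name, fn) {
        var list = this.listeners[name] || [];
        var i = list.indexOf(fn);
        if (i >= 0) { list.splice(i, 1); }
    },
    fire: function (name, data) {
        (this.listeners[name] || []).forEach(function (fn) { fn(data); });
    }
};

// The firing side only needs to know the event name:
var received = [];
function onSomethingHappened(data) { received.push(data); }

EventBus.addListener('SomethingHappenedEvent', onSomethingHappened);
EventBus.fire('SomethingHappenedEvent', 42);   // listener runs
EventBus.removeListener('SomethingHappenedEvent', onSomethingHappened);
EventBus.fire('SomethingHappenedEvent', 43);   // nobody listens any more
```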


Adding lots of HTML to the page is better done by building large String snippets than by manipulating the DOM. Various templating techniques and syntaxes can be used to create them. Naturally, when implementing this manually, regexes are somewhat slower than simpler String operations.
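As an illustration of the technique (my own toy `{name}`-placeholder template, not code from the talk):

```javascript
// Naive string templating: substitute {key} placeholders from a data
// object, then build the final markup with a single join instead of many
// individual DOM operations.
function render(template, data) {
    return template.replace(/\{(\w+)\}/g, function (match, key) {
        return data.hasOwnProperty(key) ? String(data[key]) : match;
    });
}

var rows  = [];
var items = [{ name: 'a', price: 1 }, { name: 'b', price: 2 }];
for (var i = 0; i < items.length; i++) {
    rows.push(render('<li>{name}: {price}</li>', items[i]));
}
var html = rows.join('');  // one string, one innerHTML assignment
```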

Btw, here's the pattern for iterating over an object's properties, only those it actually owns:

for ( var prop in o )
    if ( o.hasOwnProperty( prop ))
        var j = o[ prop ];

Being a Groovyist at heart, I was dying to replace it with something like:

for ( j in o.findAll{ o.hasOwnProperty( it ) }.collect{ o[ it ] })


The second presentation, "Android 101" by Dror Shalev from DroidSecurity, was a killer! I've always said you need two things to make a successful presentation: the material itself should be rock-solid and interesting, of course, but you also need a presenter who makes a show out of it. When those two come together .. well, that's what people remember for years, I believe. And that's exactly what Dror demonstrated to us today.

It was very interesting to hear how hackable Linux, the Dalvik VM and Android are, and how important it is to have a decent "antivirus-like" application on every Android phone. The problem is that none of the applications available in the Android Market is checked by anyone. It's a free world for everybody, and bad things may (and do) happen if malicious software is installed. Now, the DroidSecurity solution doesn't scan files (it can't), but it analyzes what's happening and what's running on the phone, communicating with the company's servers. To see it in the Market, go to "Top Free/Paid", choose the "Communication" category and scroll down to "antivirus free" or "Antivirus Pro" (too bad it's impossible to provide a direct link .. hmm, somebody?)

We were also told there’s an iAndroid community for Android developers in Israel.


The third presentation, "Django AppEngine based platform building site", was about 9Folds, a CMS engine built by Alon Burg for quickly creating Web sites using Django templates and an AppEngine-deployed .. eeeh, engine. I'm not experienced with Django, so I couldn't fully appreciate the beauty of it, but Alon mentioned some nice AppEngine caching techniques. He's also using a rather clever trick to upload template files to a "live" site: he pushes them to a public GitHub repo, from which the application code downloads them. What a brilliant idea; browser upload, be it Flash or not, doesn't come close to "git push", of course.

Alon has also mentioned Varnish, a “high-performance HTTP accelerator”.

Take a look at the 9Folds gallery or browse Narayan World to see what the results look like.

So .. It was a very intense evening and I didn’t even see it all!
Many thanks to Omri for organizing and running this event!

Categories: Google Tags: ,

Artifactory Online – the case of distributing Groovy++

May 5, 2010 4 comments

After working with the open source Artifactory version and thoroughly exploring its add-ons, I knew the moment would come to put my hands on its cloud solution, Artifactory Online. It just made sense to "close the loop" this way .. The moment I heard about an online instance, running 24×7 without having to take care of anything, it sounded really, really nice.

I can't say we spend a lot of time administering our open source version at Thomson Reuters. Quite the opposite: I only need to take it down for upgrades from time to time. But it still takes up a machine. A virtual one, of course, but still, that's CPU and memory that could be well spent somewhere else. So having an online instance not only gives peace of mind, freeing everybody from taking care of one more server and one more database; it frees some hardware resources as well.

Well, that's the beauty of cloud computing, when it works. And Artifactory certainly does!

But my first use of Artifactory Online was for a slightly different purpose: Maven support for the Groovy++ project. There was a clear need to host the Groovy++ binaries in a public Maven repo.

What options are available today?

  • OSS repository hosting from Sonatype.
It's a good free solution, but like any other free solution it only provides you so much: if you don't mind being at the mercy of other people with certain demands about how your POMs should look, then it's a good way to go. But I'd prefer my personal repository, where I can configure it the way I want without sharing it with other projects and asking for favors. Also, bear in mind you get no security whatsoever: all your binaries are open to everybody, anytime. And it only works for open-source projects, which can be another showstopper.

  • Another option is to host a public Apache or nginx server and just make the files available following Maven's naming conventions, like it's done on "repo1". Not to mention the lack of security (again), this kind of storage is vulnerable to file corruption: after all, it's just dumb file storage, not an intelligent repository manager. You can't use Maven to deploy artifacts, and it provides no additional services like virtual repositories, artifact search or usage statistics.

  • Public hosting of open-source Nexus or Artifactory: it's much better, and we can finally protect it the way we want. But we still need to pay for hosting, memory usage and bandwidth, and we now need to install and administer it on top of everything else. And put some extra protection in place, maybe.

  • Artifactory Online. The best way to go, if you ask me. Not only does it provide a cloud-based, 24×7 running Artifactory instance, it does so with all add-ons installed, so you really get the 'Full Monty'! It solves our original problem of hosting binaries in a public Maven repo, but it doesn't stop there; as I'll show shortly, running a private Artifactory instance brings other advantages to your projects.

We've settled on the last option!
The initial setup went very fast as there were very few things to take care of, actually:

  1. Registration

  3. Creating a deployment user and "settings.xml". The fact that Artifactory provides a way to generate a new "settings.xml" (skip to 00:10:40) and store encrypted passwords (00:11:55) comes in very handy:

            <password>{ABAeqq}pIcMooZ8G/2Y2drgC99SDw==</password> <!-- Sample -->
  4. Instructing Maven about new repositories:


As you see, we’re using new repo not only for <distributionManagement> but as our only Maven repository.
From now on we only talk to a single address.
This is a "virtual repository", a "gateway" Maven connects to for retrieving any third-party library. I can now add additional Maven repositories by editing it in Artifactory; there's no need to update the POM any more when new external repos are added to the project.
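The settings.xml fragment that routes all Maven traffic through such a virtual repository could look roughly like this (the host name and repository key below are placeholders, not the project's real URLs):

```xml
<settings>
  <mirrors>
    <mirror>
      <!-- Route ALL repository traffic through the virtual repository -->
      <id>artifactory</id>
      <mirrorOf>*</mirrorOf>
      <url>https://yourorg.artifactoryonline.com/yourorg/libs-releases</url>
    </mirror>
  </mirrors>
</settings>
```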

After this quick setup I ran it for the first time. I was expecting somewhat slower performance than what we have in the office, where Artifactory runs on the same network. After all, we're talking about a remote repository running somewhere across the ocean:

But the download was pretty fast. It depends on the bandwidth, of course, but I can't say the significantly more distant repository slowed me down. The UI was very responsive, and Maven's filling of an empty local repo was fast enough that I didn't notice any significant difference. Good!

The Groovy++ project has now been happily using Artifactory Online for several months and releases; you're always welcome to download the latest version manually or give it a try with Maven.

What else can I say about running a private repo like that?

I think the main beauty of it is being able to "go public" in a matter of minutes. No setups, no worries: you have your very own binaries storage, intelligent and secured, that can be used for any purpose. That's right, Artifactory can serve any binaries, not necessarily Maven artifacts. So one can store practically anything there and then secure it or back it up safely.

Makes me think of the various App Store services where people publish their Android / iPhone applications and enjoy the ride. That's good; I believe "going public" should be easy for anyone today. This way creativity meets no entry barriers!

I only have a single request for the Artifactory developers: an option to create aliases for an existing repo. This would allow reusing the same repository for different projects or purposes: two different addresses would point to the same Artifactory instance but be used by different people.

Overall, a very pleasant experience!
Exactly what I was expecting – can’t help it but these guys never disappoint 🙂

“I’ll send them an e-mail” – do you really believe it’s enough?

May 1, 2010 Leave a comment

In my career I've worked as a CM engineer many times. It so happens I just love dealing with builds.

When you're a CM engineer, you usually get to change many things around: how builds work, where the main POM is, how jobs are configured in Hudson and when they're scheduled to run.
Lots and lots of things.

What happens when other people need to be informed about those changes?
Or when they actually need to do something about them?

For example, when our Artifactory instance moved to a new server, all developers had to update their "settings.xml". Technically, this situation brings no issues: it doesn't take a lot to make the move, and the time is only spent exporting the data and importing it back. But it's still very tricky, as lots of people are involved: when the old Artifactory instance is shut down, lots of builds will fail if "settings.xml" isn't updated.

So I’ll just send the new file by e-mail, right? Wrong.
Many people think e-mail is the best way to "let people know" but, unfortunately, in many offices it doesn't work that way.

Mails can be ignored so easily.

Take a look at an average user's "Inbox": how many unread mails will you see there? 10, 50, 300? I usually keep my "Inbox" empty; all incoming mails are filtered into relevant folders, so when I get to the office in the morning I see the overall picture right away: 3 mails from my boss, 5 mails from our group, 2 mails from QA, etc. Very few mails are usually left in the "Inbox", which lets me know what's "unread" right now so I can easily decide what to ignore or start with first. I try not to have any unread mails by the end of the day, so that my counter drops to zero when I go home.

But many other people have an "Inbox" full of all of last year's e-mails, and when they see "30 unread messages" they don't really know what's there until they go over all of them, one by one. And when they do read the message, they can choose to ignore it or misunderstand the importance of the change.

That's life, and we can talk endlessly about how educating people about time or mail management would help .. Fortunately, there's a simpler way: we can talk to them.

In our place, it only takes an hour or two to step into all the rooms, announce the change and get feedback. And it works much, much better, as people now get to talk back. I let them know there's an e-mail sent, I make sure the importance of the change is clear and, of course, I try to listen to what they have to say.

When it was first suggested to me to go room by room and talk to people, I wondered how such a waste of time could be of any use. After all, it's all written nicely in mail or on the Wiki, so why bother? But after doing it once I felt how much difference it actually makes. How much information and attention I got back from people.

We can schedule a meeting for the whole group, of course! Whatever works. The main idea is simple: there's no replacement for talking. I've come to believe it's the combination of published information (that can always be referenced later) and verbal communication that provides the best way to "let people know" and ask for their cooperation.

And it also makes everyone involved feel much better along the way!

Categories: Misc Tags: ,

Uploading files – multipart HTTP POST and Apache HttpClient

May 1, 2010 63 comments

I had to implement a file transfer mechanism where one machine sends files to another using a "multipart/form-data" POST request. It can be done using Apache's Commons FileUpload and HttpClient.

The receiving part was an easy one:


We parse the incoming request with ServletFileUpload and get a list of FileItems in return. Each FileItem is either a form input field or an uploaded file:

if ( ServletFileUpload.isMultipartContent( request )) {
    List<FileItem> fileItems =
        new ServletFileUpload( new DiskFileItemFactory( 1024 * 1024, DIR )).
        parseRequest( request );

    for ( FileItem item : fileItems ) {
        String fieldName = item.getFieldName();

        if ( item.isFormField()) { item.getString();      } // Form's input field
        else                     { item.getInputStream(); } // File uploaded
    }
}

In our case, we use DiskFileItemFactory to store files larger than 1 MB in a temporary DIR. After reading the file's InputStream and storing the data in proper storage, we need to delete the temporary copy: item.delete().

It's the sending part that turned out to be a bit trickier. Initially, I was using a simple HTML form:

<form action="http://localhost" method="post" enctype="multipart/form-data">
    <input type="file" name="file">
    <input type="text" name="paramName">
    <input type="submit" name="Submit" value="Upload File">
</form>

But then I switched back to Java, assuming HttpClient would do the job.


Eventually, it did, but it took me some time to figure out how. The problem with HttpClient is that it provides a nice tutorial and various usage examples, but none of them actually says a word about uploading files!

I figured out I needed to set an instance of HttpEntity on the request, but it seemed like it was going to be either a StringEntity or a FileEntity, not both. How come?! Why is it so hard to send a usual POST request with String and file parameters?

Ok, it’s Google time.

Some examples and documentation referred to an outdated version when HttpClient was part of Apache Commons and were therefore of no use to me; the API has changed dramatically. Until I found this example that finally saved my day. Radomir, thank you!

The solution is to use an additional Apache component – HttpMime:



and then we finally get to use a magical MultipartEntity:

HttpClient client = new DefaultHttpClient();
client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1);

HttpPost        post   = new HttpPost( url );
MultipartEntity entity = new MultipartEntity( HttpMultipartMode.BROWSER_COMPATIBLE );

// For File parameters
entity.addPart( paramName, new FileBody((( File ) paramValue ), "application/zip" ));

// For usual String parameters
entity.addPart( paramName, new StringBody( paramValue.toString(), "text/plain", 
                                           Charset.forName( "UTF-8" )));

post.setEntity( entity );

// Here we go!
String response = EntityUtils.toString( client.execute( post ).getEntity(), "UTF-8" );


Note the use of EntityUtils for reading the response.

That’s it.

I only wish library authors provided better support and examples for the more common cases, like file uploading in our case. I mean, come on, when people get to use HttpClient they either want to send a usual request or upload a file, the same things they do with a browser. Am I wrong here?

Categories: Java, Web Tags: , , , , ,

Groovy++ goes APL 2.0!

April 19, 2010 Leave a comment

Groovy++ goes APL 2.0!

This is obviously great news to Groovy/Groovy++ community.

Categories: Groovy Tags: ,

10 Online Tools for Superb Productivity

April 17, 2010 25 comments

I love being productive.

I love it to the point where I actually hate being slowed down by an application or a resource. I don’t mind waiting but only for a good reason. Anything that makes me stare at the screen doing absolutely nothing will usually drive me into searching for a faster replacement right away.

"Being productive" starts with "working fast" and "using the best tools" for me. So my favorite online tools are what I would like to write about today. Had I written this review several years ago, I would most definitely have talked about "10 Windows Applications for Superb Productivity". But the Web is where I/we spend most of the time today, so it makes more sense to talk about Web applications than about various Windows tweaks.

1. Google Chrome

Obviously, living on the Web starts with a browser. Like many others, I was a devoted Firefox user for the last 5 years. After all, it was the only choice that actually made sense on Windows. When Chrome initially came out I wasn't much impressed, but lots of things have changed since then.

So .. why Chrome today? For one reason, mostly: it's fast, and I've mentioned already how important working fast is to me. Chrome's start-up time is light-years ahead of Firefox, and no restart is required when extensions are (un)installed. Those two factors add up to a tremendous speed-up when working online, as I wait much less now.

2. delicious

Keeping bookmarks online is an old idea, and being able to tag them isn't novel either. Today I use delicious as my main storage of everything I ever read and find useful for later reference. Cars, tablet PCs, video sessions and travelling: it's all there, anytime, anywhere.

Using the browser's keyworded searches, I access a tag by typing "d tag" ("d tag1 tag2" for a combination) and search delicious with "ds search". It works amazingly fast, allowing me to pull almost anything from my last year of browsing in a matter of seconds. This "d(s) something" thing is what I believe I type most in the browser's address bar today.

With its Chrome extension now supported in the Chrome Dev channel (finally!), I enjoy it even more. But I still keep the bookmarklet around, on the left side of my bookmarks bar, so I use either the extension's button or the bookmarklet to add a link, whichever my mouse is closer to.

3. Zoho Writer

Working online means keeping notes and documents. Zoho Writer is my #1 application of choice now: it's fast and it looks really great. Ironically, I heard of it when Microsoft's "fake Office" made its way into the blogosphere. So, yes, this "fake Office" works pretty well for me now; all my private summaries, notes and drafts are there.

I only wish:

  • I could export all documents at once, as a backup copy.
    Whatever they say, I never trust "the cloud" completely, making backup copies even of my Gmail account.
  • "Google Sign In" would sign me in transparently.
    After opening "" I'm forced to click a "G" button to enter. This extra "G" click may not sound like a big deal to many, but when one gets used to "Remember me" allowing access to resources and documents with a single click, this extra delay is quite painful, actually. It really defeats the way I believe the Web should work: one single click to get me "in".

4. Zoho Notebook

It's not hard to get lost in all my Zoho documents and, sadly, I still don't get its way of tagging. But I now use Zoho Notebook as a way to organize related docs as "books", grouping them together. I can edit them in Writer or Notebook, it doesn't matter. Working in Notebook is significantly slower, though.

Of course, it’s intended for OneNote-like documents but I mostly use it as my “tagging” mechanism. A real OneNote is something I use a lot in the office.

5. Mindomo

After getting used to the online mode of working, it doesn't come naturally to install any desktop mind-mapping application, like FreeMind. Searching for an online solution brought me to Mindomo, and I have to tell you .. it's beautiful.

Surprisingly, it's way, way better than mindmeister, which I've heard so much about recently.

Too bad it suffers from the same "Sign in with Google" extra click as Zoho does. How come there's no "Remember me" option for those cases?!

6. Dropbox

Keeping files online is pretty standard today, but lots of applications have failed to deliver a good upload process, relying on the browser's capability to upload files. Trying to upload a bigger file usually resulted in broken connections and lots of frustration. Few resources cared to provide a desktop "uploader" that deals with slow and unreliable networks.

YouSendIt has one, and it's excellent; I was using it a lot for a number of years. But the free YouSendIt version doesn't keep files forever, while Dropbox does. Also, Dropbox has a native service installed, monitoring and syncing a certain folder: all I need to do in order to upload a file to the cloud and sync it with all my machines is copy it to "e:/Data/Dropbox/My Dropbox". That's it! After copying a file at home I find it available on my office machine when I get there.

Can it be any simpler than that?!

I even use Dropbox for transferring files from virtual to hosting machines until I get around to making a "shared folder" work.

7. HootSuite

Twitter is my main source of new information. Keeping an eye on what's happening is a real "must" today. And being able to do so in 4 columns is an awesome thing!

8. Chrome – SendLink

I can send a quick mail containing the current link with only two clicks (you do remember I always count clicks, right?), without having to actually type or copy anything. That's fast.

9. Chrome – URL Shortener

Another "one-click" favorite: the shortened URL is copied to the clipboard when I hit the extension's button. Dropbox could improve by doing the same; it's a two-step process there:

Immediate social sharing and keyboard shortcuts are available.

Less is more, and it's nice to see that Googlers count clicks as well. I guess it's too bad we can't go any further down from one click. Zero clicks! How about that? 🙂

10. Chrome – Tweetings

While HootSuite is great for reading tweets, I use Tweetings for posting them. It's quick, and it remembers the text entered even if I switch tabs to grab a shortened URL. It changes color to notify me about Twitter "mentions" and "replies". What a great handy little tool.

11. Online dictionaries: Yandex and Dictionary

It would not be fair to leave out online translators. As previously, keyworded searches are my friends here.

"tr anything":

"dic make":

That’s it!
Those were my favorite online tools, making "living on the Web" very enjoyable and productive.

What are yours? I would love to hear.