Channel: Planet Eclipse

Philip Wenig: RCPTT – set a file from the workspace dynamically


I’m quite amazed how powerful RCPTT is!
https://www.eclipse.org/rcptt

Recently, I encountered a problem. The task was to set the path of a file in a file selection dialog, but it should be independent of the test system currently in use. The first try was to set the file directly, but then the path is hardcoded:


set-dialog-result File "/home/testman/rcptt/HelloWorld.txt"
get-window -class WizardDialog | get-button "Select *.txt" | click

Hence, a better way is to clear the workspace and import the needed files into it before running the tests; a context can be created within RCPTT for this. The workspace is resolved via "workspace:/" in RCPTT. This could be a solution, because the workspace is created independently – different systems, different locations. Hence, we could try:


set-dialog-result File "workspace:/com.acme.rcptt/HelloWorld.txt"
get-window -class WizardDialog | get-button "Select *.txt" | click

But this won’t work. We should have a look at the ECL commands:

http://download.xored.com/q7/docs/ecl-api/latest

Instead, we can get the workspace location by using the ECL command “get-workspace-location” and concatenate it with the path to our file in the workspace:


set-dialog-result File [concat [get-workspace-location] "/com.acme.rcptt/HelloWorld.txt"]
get-window -class WizardDialog | get-button "Select *.txt" | click

Tadaaa, that’s it!


Alex Blewitt: Finding duplicate objects with Eclipse MAT


I’ve written before about optimising memory in Eclipse, previously looking at the preponderance of new Boolean() (because you can never have too many true or false values).

Recently I wondered what the state of other values would be like. There are two interesting types: Strings and Integers. Strings obviously take up a lot of space (so there's more effect there) but what about Integers? Well, back when Java first got started there was only new Integer() if you wanted to obtain a primitive wrapper. However, Java 1.5 added Integer.valueOf(), which is defined (by its JavaDoc) to cache values in the range -128…127. (This was because autoboxing was added along with generics, and autoboxing uses Integer.valueOf() under the covers.)
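
To make the caching behaviour concrete, here is a minimal sketch (not Eclipse-specific) that compares instance identity for values inside and outside the cached range:

public class IntegerCacheDemo {
    public static void main(String[] args) {
        // Values in -128..127 come from the shared cache, so valueOf() returns the same instance.
        System.out.println(Integer.valueOf(100) == Integer.valueOf(100));   // true
        // Outside the cached range, valueOf() creates a fresh instance each time (by default).
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // false
        // The constructor always allocates a new instance, even for small values.
        System.out.println(new Integer(100) == new Integer(100));           // false
    }
}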

There were other caches added for other types; the Byte type is fully cached, for example. Even Character instances are cached; in this case, the ASCII subset of characters. Long values are also cached (although it may be that the normal values stored in a Long fall outside of the cacheable range, particularly if they are timestamps).

I thought it would be instructive to show how to use Eclipse MAT to identify these kinds of problems and how to fix them. This can have tangible benefits; last year (thanks to a kind reminder from Lars Vogel) I committed a fix that was the result of discovering the string www.eclipse.org over 6000 times in Eclipse's memory.

Once you install Eclipse MAT, it doesn’t seem obvious how to use it. What it provides is an editor that can understand Java’s hprof memory dumps, and generate reporting on them. So the first thing to do is generate a heap dump from a Java process in order to analyze it.

For this example, I downloaded Eclipse SDK version 4.5 and then imported an existing “Hello World” plug-in project, which I then ran as an Eclipse application and closed. The main reason for this was to exercise some of the paths involved in running Eclipse and generate more than just a minimal heap.

There are many ways to generate a heap; here, I’m using jcmd to perform a GC.heap_dump to the local filesystem.

$ jcmd -l
83845 
83870 sun.tools.jcmd.JCmd -l
$ jcmd 83845 GC.heap_dump /tmp/45.hprof
83845:
Heap dump file created

Normally the main class will be shown by JCmd; but for JVMs that are launched with an embedded JRE it may be empty. You can see what there is by using the VM.command_line command:

$ jcmd 83845 VM.command_line
83845:
VM Arguments:
jvm_args: -Dosgi.requiredJavaVersion=1.7 -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts -XX:MaxPermSize=256m -Xms256m -Xmx1024m -Xdock:icon=../Resources/Eclipse.icns -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts -Dosgi.requiredJavaVersion=1.7 -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts -XX:MaxPermSize=256m -Xms256m -Xmx1024m -Xdock:icon=../Resources/Eclipse.icns -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts 
java_command: <unknown>
java_class_path (initial): /Applications/Eclipse_4-5.app/Contents/MacOS//../Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
Launcher Type: generic

Opening the heap dump

Provided that Eclipse MAT is installed, and the heap dump ends in .hprof, the dump can be opened by going to “File → Open File…” and then selecting the heap dump. After a brief wizard (you can cancel this) you’ll be presented with a heap dump overview:

This shows a pie chart showing which classes contribute to most heap space; moving the mouse over each section shows the class name. In the above image, the 3.8Mb slice of the pie is selected and shows that it is due to the org.eclipse.core.internal.registry.ExtensionRegistry class.

To find out where duplicate objects exist, we can open the Group By Value report. This is under the blue icon to the right of the profile, under the Java Basics menu:

When this menu is selected, a dialog will be presented, which allows one (or more) classes to be selected. It’s also possible to enter more specific searches, such as an OQL query, but this isn’t necessary at first.

To find out what the duplicated strings are, enter java.lang.String as the class type:

This then shows all of the objects whose .toString() value is the same, including the number of objects and the shallow heap (the amount of memory taken up by the direct objects but not referenced data):

The results are sorted by the number of objects and then by shallow heap. In this case, there are 2,338 String instances with the value true, taking up 56k, and 1,051 instances with the value false. You can never be too sure about the truth. (I want to know what JEDgpPXhjM4QTCmiytQcTsw3bLOeXXziZSSx0CGKRPA= is and why we need 300 of them …)

The impact of duplicate strings can be mitigated with Java 8’s -XX:+UseStringDeduplication feature. This will keep all 2,338 instances of String but repoint all of the char elements to the same backing character array. Not a bad tune-up, and for platforms that require Java 8 as a minimum it may make sense to enable that flag by default. Of course tooling (such as Eclipse MAT) can’t tell when this is in use or not so you may still see the duplicate data referenced in reports.

What about Integer instances? Well, running new Integer() is guaranteed to create a new instance while Integer.valueOf() uses the integer cache. Let’s see how many integers we really have, by running the same Group By Value report with java.lang.Integer as the type:

Quite a few, though obviously not as memory-hungry as the Strings were; in fact, we have 11k’s worth of Integer instances on heap. This shows that small numbers, like 0, 1, 2, and 8 are seen a lot, as are MAX_VALUE and MIN_VALUE. Were we to fix it we’d likely get around 10k back on heap – not a huge amount, to be sure.

The number 100 seems suspicious; we’ve got a few of them kicking around. Plus unlike our other power-of-two numbers it seems to stick out. So how do we find where that comes from?

One feature of Eclipse MAT is the ability to step into a set of objects and then show their references; either incoming (objects that point to this) or outgoing (objects that they point to). Let's see where the references have come from, by right-clicking on 100 and then "List Objects → Incoming References". A new tab will be opened within the editor showing the list of Integer values, which can then be expanded to see where they come from:

This shows 5 instances, and their reference graph. The last one is the one created by the built-in Integer cache but the others all seem to come from the org.eclipse.e4.ui.css.core.dom.properties.Gradient class, via the GradientBackground class. We can open up the code to see a List of Integer objects, but no allocation:

public class Gradient {
    private final List<Integer> percents = new ArrayList<>();
    public void addPercent(Integer percent) {
        percents.add(percent);
    }

Searching for references in the codebase for the addPercent() method call leads to the org.eclipse.e4.ui.css.swt.helpers.CSSSWTColorHelper class:

public class CSSSWTColorHelper {
    public static Integer getPercent(CSSPrimitiveValue value) {
        int percent = 0;
        switch (value.getPrimitiveType()) {
        case CSSPrimitiveValue.CSS_PERCENTAGE:
            percent = (int) value
            .getFloatValue(CSSPrimitiveValue.CSS_PERCENTAGE);
        }
        return new Integer(percent);
    }
}

And here we find both the cause of the duplicate integers and also the meaning. Presumably there are many references in the CSS files to 100% in the gradient, and each time we come across that we’re instantiating a new Integer instance whether we need to or not.

Ironically if the method had just been:

public class CSSSWTColorHelper {
    public static Integer getPercent(CSSPrimitiveValue value) {
        int percent = 0;
        switch (value.getPrimitiveType()) {
        case CSSPrimitiveValue.CSS_PERCENTAGE:
            percent = (int) value
                .getFloatValue(CSSPrimitiveValue.CSS_PERCENTAGE);
        }
        return percent;
    }
}

then autoboxing would have kicked in, which uses Integer.valueOf() under the covers, and it would have been fine. Really, using new Integer() is a code smell and should be a warning; and yes, there’s a bug for that.

And, as is usual in many cases, Lars Vogel has already been in and fixed bug 489234:

 - return new Integer(percent);
 + return Integer.valueOf(percent);

Conclusion

Being able to fix these integer allocations isn't especially important in itself, but realising that new Integer() (and doubly so, new Boolean()) is an anti-pattern is the educational point here. Generally speaking, if you have new Integer(x).intValue() then replace it with Integer.parseInt(x), and otherwise replace new Integer(x) with Integer.valueOf(x) instead.
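
As a rough sketch (the variables here are just placeholders), those replacements look like this:

public class IntegerReplacements {
    public static void main(String[] args) {
        String s = "42";
        int i = 42;

        // Parsing a String: skip the intermediate wrapper object entirely.
        int parsedOld = new Integer(s).intValue();
        int parsedNew = Integer.parseInt(s);

        // Wrapping an int: let the built-in cache do its work.
        Integer boxedOld = new Integer(i);
        Integer boxedNew = Integer.valueOf(i);

        System.out.println(parsedOld + " " + parsedNew + " " + boxedOld + " " + boxedNew);
    }
}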

In fact, if you’re returning from a method that is declared to be of type Integer or assigning to a field of type Integer then you can just use the literal value, and it will be created to the right type with autoboxing (which uses Integer.valueOf() under the hood). However if you’re inserting values into a collection type then instantiating the right object is a better idea.

If you know your value is outside of the cached range then using new Integer() will have exactly the same effect as calling Integer.valueOf(), and under JIT optimisation for hot methods you'd expect them to perform the same. However, note that Integer.valueOf() could change over time (for example, to cache MAX_VALUE), which you won't be able to take advantage of if you use the constructor. Plus, there's also a run-time switch -Djava.lang.Integer.IntegerCache.high=1024 if you want to extend the cache to cover more integers. This is currently only respected for Integer types though; other primitive wrappers don't have the same configuration property.
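
As a quick check of whether an extended cache is in effect, this sketch prints false with the default settings and true when run with -Djava.lang.Integer.IntegerCache.high=1024 on a JVM that supports the switch:

public class CacheRangeCheck {
    public static void main(String[] args) {
        // With the default cache (-128..127) this prints false;
        // with -Djava.lang.Integer.IntegerCache.high=1024 it prints true.
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000));
    }
}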

In addition, being able to look for duplicate objects in memory and discover where the heap goes is an important tool in understanding where Eclipse's memory is used and what can be done to try and resolve some of those issues. For example, digging into a stray object reference resulted in discovering that P2 has its own Integer cache, despite having a minimum dependency of Java 1.5, which added Integer.valueOf(). Hopefully we can remediate this.

Oh, and that JEDgpPXhjM4QTCmiytQcTsw3bLOeXXziZSSx0CGKRPA= string? It turns out that it’s a value in META-INF/MANIFEST.MF for the SHA-256-Digest of a bunch of (presumably empty) resource files in the com.ibm.icu bundle:

Name: com/ibm/icu/impl/data/icudt54b/cy_GB.res
SHA-256-Digest: JEDgpPXhjM4QTCmiytQcTsw3bLOeXXziZSSx0CGKRPA=

Name: com/ibm/icu/impl/data/icudt54b/ksb_TZ.res
SHA-256-Digest: JEDgpPXhjM4QTCmiytQcTsw3bLOeXXziZSSx0CGKRPA=

In fact, you might not be surprised to know that there are 300 of them :)

$ grep SHA-256-Digest MANIFEST.MF | sort | uniq -c | sort -nr
 300 SHA-256-Digest: JEDgpPXhjM4QTCmiytQcTsw3bLOeXXziZSSx0CGKRPA=
  65 SHA-256-Digest: Ku5LOaQNbYRE7OFCreIc9LWXXQBUHrrl1IhxJy4QRkA=
  61 SHA-256-Digest: TFNUA5jTkKhhjE/8DQXKUtrvohd99m5Q3LrEIz5Bj4I=
  53 SHA-256-Digest: p7PURP2WmyEtwG26wCbOYyN+8v3SjhinC5uUomd5uJA=
  53 SHA-256-Digest: fTZLTXXbc5Z45DJFKvOwo6f5yATqT8GsD709psc90lo=
  49 SHA-256-Digest: SiArmu+IqlRtLpSQb6d2F5/rIu6CU3lnBgyY5j2r7s0=
  49 SHA-256-Digest: A5xl6s5MaIPeiyNblw/SCEWgA0wRdjzo7e7tXf3Sscs=

It turns out that while investigating one optimisation you find another potential optimisation. The manifest parser stores the manifest for the bundles, which has both the main section (where the interesting parts of the manifest live) as well as all of the other sections (including their signatures). I'm not sure that it's really needed; it was introduced in 865896 and the only place it's used is to attempt to capture a per-directory Specification-Title.

Since this data is not widely used by OSGi, modifying the runtime to not store the hash data would save ½Mb or so of redundant strings, and the other savings could add up to a couple of megabytes or so:

Whether this is an optimisation that can be applied is being discussed in bug 490008.

vert.x project: Vertx 3 and Azure cloud platform tutorial


Vert.x 3.2.1 applications can quickly be deployed on Microsoft Azure. Deployment is independent of your build so it is all about configuration.

About Azure

Azure by design does not support multicast at the network virtualization level. However, all virtual machines defined in the same group are deployed on the same network (by default), so TCP-IP discovery can be enabled and quickly set up to form a cluster.

This is how you would deploy your app:

  1. create a fat-jar with your app
  2. create a cluster.xml with tcp-ip discovery
  3. run your app with the folder containing your cluster.xml on the classpath (-cp) and the -cluster -cluster-host VM_PRIVATE_IP options (a programmatic sketch of the same setup follows below)
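
For reference, the same clustering setup can also be done programmatically instead of via the command-line flags. This is only a sketch using the standard Vert.x 3 core API; com.example.MyVerticle and the VM_PRIVATE_IP environment variable are placeholders for your own verticle and private address:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredMain {
    public static void main(String[] args) {
        // Programmatic equivalent of "-cluster -cluster-host VM_PRIVATE_IP":
        // enable clustering and bind the cluster manager to the VM's private address.
        VertxOptions options = new VertxOptions()
                .setClustered(true)
                .setClusterHost(System.getenv("VM_PRIVATE_IP"));

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                res.result().deployVerticle("com.example.MyVerticle"); // placeholder verticle
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}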

Screencast

The following screencast explains how you can do this from scratch:

Don’t forget to follow our youtube channel!

Frank Appel: OS X Sprout of the Ergonomic Eclipse Theme Clean Sheet


Written by Frank Appel

Early enough to pass as an Easter gift, the latest update of our ergonomic Eclipse theme Clean Sheet comes with some great enhancements. While OS X support is leading the way, there are also Windows-specific improvements in the form of FlatScrollBar overlays for text editors. This post gives a short overview of the most important innovations of the feature's new version (0.3).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and are wondering what I'm talking about, here is a screenshot of my real-world setup using the Clean Sheet theme (click on the image to enlarge).

Eclipse IDE Look and Feel: Clean Sheet Screenshot

For more information, please refer to the feature's landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Ergonomic Eclipse Theme for OS X

Probably the most remarkable addition in the latest version is the Mac OS X support in Clean Sheet. Having been approached on the subject on various occasions, it seemed worthwhile to provide an ergonomic Eclipse theme version that adopts the look-and-feel concept, as well as the Source Code Pro font, from its Windows relative.

Ergonomic Eclipse Theme OS X Support

FlatScrollBar Overlay for StyledText

Styling capabilities have been enhanced to allow adoption of StyledText widgets by the FlatScrollBar overlay mechanism on Windows 10. With this in place, SourceViewer based UI parts like code editors or console content now fit nicely in the overall look and feel of the Clean Sheet theme.

Ergonomic Eclipse Theme StyledText

Font and Color Scheme Adjustments of the Debug Console

The debug console now dovetails with the general Clean Sheet color scheme with respect to the background and text colors. Additionally, the console’s font family has been changed to Source Code Pro for a more consistent reading experience.

Ergonomic Eclipse Theme Debug Console

Clean Sheet Installation

Drag the 'Install' link below to your running Eclipse instance

Drag to your running Eclipse installation to install Clean Sheet

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. In particular, as the StyledText widget is a pretty complex component by itself there still might be some uncovered spots with the newly added scrollbar overlay mechanism. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

With this in mind, I’d like to thank all the Clean Sheet adopters for the support and wish everybody a happy easter egg hunt 😉

Title Image: © Depositphotos.com/piccola

The post OS X Sprout of the Ergonomic Eclipse Theme Clean Sheet appeared first on Code Affine.

Holger Staudacher: Open Sourcing Eclipse Dropwizard Tools


Here at Tasktop we really like industry standards. It's important for us to use well-known tools and frameworks to make transitions between teams easier. When it comes to creating a REST API there is no way around JAX-RS in the Java world. But JAX-RS (or one of its implementations) is not a full server stack. You always need to use some other technologies, e.g. to access a database, do logging, and much more.

Several projects exist that provide such a stack, and one of the coolest projects of the last few years is Dropwizard. Dropwizard combines industry-standard technologies like JAX-RS (Jersey), Hibernate, log4j, Guava, and Jetty (and some more) and glues them together. We use Dropwizard here at Tasktop for several products because it just works and makes things easy.
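
To give an idea of what such an application looks like in code, here is a minimal sketch using the standard Dropwizard APIs; HelloApplication and its nested HelloResource are purely hypothetical, and the resulting fat-jar would typically be started with the arguments "server config.yml":

import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical minimal Dropwizard service.
public class HelloApplication extends Application<Configuration> {

    public static void main(String[] args) throws Exception {
        // Typically launched as: java -jar hello-fat.jar server config.yml
        new HelloApplication().run(args);
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // Register a JAX-RS resource with the embedded Jersey container.
        environment.jersey().register(new HelloResource());
    }

    @Path("/hello")
    @Produces(MediaType.TEXT_PLAIN)
    public static class HelloResource {
        @GET
        public String sayHello() {
            return "Hello, Dropwizard!";
        }
    }
}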

As you might know, we have a heavy Eclipse background and are using Eclipse as an IDE for all our projects. For this reason we also use Eclipse for our Dropwizard applications. While you can launch Dropwizard applications without special tooling in Eclipse, it has some drawbacks. These are:

  • You need to use system properties to point to your Dropwizard configuration.
  • You always need to terminate a previously launched instance of your application because the port used by the running instance will be occupied when launching the new instance.

Introducing the Eclipse Dropwizard Tools

To make our life easier we have created the Eclipse Dropwizard Tools. The tools contain a launcher for Dropwizard applications. You can select the Dropwizard configuration in the UI, and it terminates all previously launched instances before it launches a new one. So no more "address is already in use" error messages.

Launcher

Besides this, the tools contain a launch shortcut. This means you can just launch your main class as a Dropwizard application, and it will pick the first .yml file in your project as the configuration to use.

Launch Shortcut

One other goodie we added is a YAML editor. YAML is an essential part of Dropwizard (e.g. for the configuration). We did not develop this ourselves; instead we use the Eclipse YEdit plugin. It's a very mature YAML editor for Eclipse and makes the editing much easier.

Installation

The Dropwizard Tools are available in the Eclipse Marketplace. You can simply drag and drop the "install" badge below into your IDE to install it.

Drag to your running Eclipse workspace to install Dropwizard Tools

Alternatively you can also use the p2 repository url which is: http://tasktop.github.io/dropwizard-tools/

License and Contribution

The Eclipse Dropwizard Tools are licensed under the Apache 2.0 Software License and hosted as a GitHub project. Feel free to open issues and create some pull requests. We would love to see your contributions, and all feedback is welcome!

Eclipse Announcements: Last week to complete the IoT Developer Survey

IoT developer? Take a few minutes to complete the survey. Deadline to participate is March 25.

Kai Kreuzer: Semantic Interoperability in the Internet of Things

I had the honor to participate in a workshop on IoT Semantic Interoperability (IOTSI) organized by the Internet Architecture Board (IAB) and hosted by Ericsson in Santa Clara.
There is no doubt that interoperability is a huge issue today, and the idea of the workshop was to analyze the situation, define potential ways forward and, especially, to bring relevant people and organisations together to ignite discussions and cooperation. A huge number of people applied for the workshop and many had to be turned down in order to keep it at a reasonable size.

As a result, there were representatives from many major organizations and corporations:
Allseen Alliance, ARM, Deutsche Telekom (me), Eclipse IoT (me as well), Ericsson, Google, Huawei, IETF, Microsoft, NIST, OMA, Open Connectivity Foundation (OCF), Open Geospatial Consortium (OGC), Oracle, SmartThings, ZigBee Alliance, and many more.


A natural reflex when being confronted with the heterogeneity in IoT is to ask for establishing a standard. One of my favorite xkcd comics nicely illustrates this:

https://xkcd.com/927/

Interestingly, the reality is even worse. With IoT at the top of Gartner's hype cycle, new consortia are created at a speed never seen before - market consolidation has clearly not yet started.

Significant effort has already gone into every single standard/consortium/product - clearly no one is willing to give up on this investment, and nobody expects this from the others for the same reason. The discussion is therefore more about translation between the different ecosystems, and this makes peer-to-peer communication schemes across systems very unlikely, since they have different data models, interaction models and different security mechanisms, which are simply incompatible. As detailed in my IOTSI position paper, there is clearly a need for an intermediator like a home gateway.

Major parts of the discussion were in my opinion more on the technical interoperability (how to send an ON/OFF boolean value from one system to another) than on the semantics. I would claim that Eclipse SmartHome is quite a step ahead of many others as it offers technical interoperability of dozens of systems out of the box and it has an architecture that was designed to support exactly this. When briefly demoing the Tesla binding, I was asked how realistic this is as people seemed to believe that this is merely a marketing showcase. Being able to answer that it is productively used by people (e.g. for smoothly working with an EV charger) and that it addresses their real needs is something where Eclipse SmartHome clearly stands out from many research projects or consortium works.

Nonetheless, from my point of view we have so far only reached a technical interoperability, since the developer, system admin or end user is still the only one that knows the semantics of certain actions, like e.g. what a switch in the app REALLY does (no, it does not switch a smart plug, it switches e.g. the radio plugged into it) - this meaning is not formally available to algorithms, but exactly this is required to bring more advanced features, such as voice control, machine learning or artificial intelligence - and it is a prerequisite to achieve semantic interoperability.

One thing I have learned is that "ontology" is a word most people are afraid of and thus its use is avoided as much as possible. The reason for this is probably that it has many facets that make its concept fuzzy and difficult to grasp and that it feels that you need to be a scientist to deal with it. Interestingly, these are exactly the problems, an ontology tries to solve: It captures a commonly agreed vocabulary, which makes all the implicit assumptions, that are inherent to a natural language, explicit.

An ontology is therefore the foundation that is needed for semantic interoperability, since it makes sure that the meaning of words is the same across different systems. TNO did major work in this regard on behalf of the European Commission in the domain of smart appliances by extracting the commonalities of more than 20 established systems and creating the SAREF ontology out of it. The ownership of SAREF is currently being transferred to ETSI and it will play an important role in the semantic definitions within the global oneM2M specification. It therefore seems to be a promising candidate for introducing semantics in Eclipse SmartHome.

An interesting aspect of providing semantic information to applications is that the notion of a device becomes almost irrelevant. As argued in my position paper, on an application level we are rather talking about services and information - exactly the same way as you would regard any other web service on the Internet. The difference for IoT applications is the fact that the services have locality. The "where" is therefore vitally important and it must be an integral part of any IoT semantics.

Introducing semantic concepts is going to be an interesting way forward for Eclipse SmartHome, and although there will be challenges, I am confident that something great will come out of it. If you are an expert in this field and want to help reduce friction in IoT interoperability, please join me on the journey!

Kaloyan Raev: Why Does Canceling Operations in Eclipse Never Work?

Well... saying "never works" is too extreme, so let's say "does not work often enough". Often enough, so the majority of users do not trust the red square button for canceling background operations.

Let's have a look at some quotes from a famous web site for collecting (mostly negative) feedback about Eclipse.
When I cancel a task, it hangs and ends up taking longer than it would have taken to let it finish.
Cancelling never works. Trying to build a project. It get's stuck. I cancel it. It cancels for 10 minutes. I have to force quit it again.
Why in gods name do you have a cancel task option if it's never going to cancel the @#$% task?? Is this some kind of sick joke?
There is also a blog post written back in 2006, which gives a more detailed picture of a user experience with the cancel button.

Why does Eclipse provide a cancel button that does not work?


Let's have a look at how the Cancel Operation button is implemented. When you click on the red square button, the Eclipse Platform raises a "cancellation" flag for the running background operation. Then it is up to the latter to check if that flag is raised and terminate itself.

In other words, the Eclipse Platform has no power to terminate background operations, but only to send them a request for cancellation. If the operation is implemented in a proper way, i.e. frequently checks for the cancellation flag, it will promptly terminate itself. Alternatively, a poor implementation may totally miss checking the cancellation flag and finish as if the user has not pushed the cancel button.

In an even worse scenario, the background operation may check for the cancellation flag, but instead of terminating itself immediately, it may try to revert everything it has done so far. While this may be a valid approach for some use cases where keeping data consistency is critical, most of the time it is just over-engineering. This way, an operation that was canceled in the middle of its execution may take longer than if it had not been canceled at all. This leads to an even more frustrating user experience.

What is the solution?


Unfortunately, there is no direct solution for you as a user to apply to your IDE. The issue is caused by weak implementations in the plugins providing the background operations, and it must be fixed there. There is no magic fix that can be implemented in the Eclipse Platform alone.

However, there is still something you can do:
  1. Report the issue - yes, please open a bug if you stumble upon an operation that does not terminate promptly when you hit the cancel button. This is a good small step to make a difference.
  2. Keep your Eclipse up-to-date with the latest version of all plugins, in the hope that these kinds of issues will be resolved over time.
If you are an Eclipse plugin developer then you should design your background operations carefully, so users are able to cancel them. It is natural to focus on the happy path when implementing a new operation. But it won't always be the case that users trigger your operation and patiently wait for it to finish. Quite often users will realize they've triggered your operation accidentally, or it is taking longer than expected and they don't want to wait for it any longer, or they just want to do something else like saving a file or shutting down the IDE, but your operation is blocking them, or... tons of other reasons that may make users want to cancel your operation.

And if users cannot cancel your operation within a few seconds, they will open the Task Manager and will kill the IDE. Which is a lose-lose situation - neither your operation will be completed, nor the user will be happy. So, give the user the chance to win :-)

Code tips on improving the implementation of background operations


The most fundamental thing to make your background operations responsive to cancellation is to check if the cancellation flag has been raised. You should already have been provided with an instance of IProgressMonitor in your operation's implementation, whether it is a Job, a WorkspaceJob, a WorkspaceModifyOperation, etc. Checking for the cancellation flag is simply a matter of calling the monitor's isCanceled() method.

The code example below checks for the cancellation flag and interrupts the operation's workflow by returning CANCEL_STATUS.
if (monitor.isCanceled()) return Status.CANCEL_STATUS;
Checking for the cancellation flag should be done as often as possible. This check is a cheap operation and there should not be any concerns about the performance. In the end, if you have a long running operation, it is more important for users to cancel it promptly than having it a few milliseconds faster.

Very often long running operations are processing lots of items in a loop. As the list of items may grow unpredictably long, there should be a check for cancellation inside the loop on every iteration. This ensures that the operation can be canceled promptly regardless of the number of items that are processed. See the example below:
while (hasMoreWorkToDo()) {
  // do some work
  // ...
  if (monitor.isCanceled()) return Status.CANCEL_STATUS;
}
return Status.OK_STATUS;
Another common issue with unresponsive background operations is when they execute an external process or a long I/O operation and block on it waiting to finish. Waiting on external processes or I/O operations should not be done indefinitely. You should take advantage of any API that allows waiting for a limited amount of time. This way you can wait just for a short time (e.g. one second), then check if your operation is canceled, and if not, wait again for a short time. Below is an example of how to wait for an external process to finish while at the same time checking if your operation has been canceled.
while (!process.waitFor(1, TimeUnit.SECONDS)) {
  // process is still running
  // check if the operation has been canceled
  if (monitor.isCanceled()) {
    process.destroy();
    return Status.CANCEL_STATUS;
  }
}
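Putting these pieces together, a cancelable background operation is typically a Job subclass along these lines (a minimal sketch; doSomeWork() stands in for the actual work):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class LongRunningJob extends Job {

    public LongRunningJob() {
        super("Long running operation");
    }

    @Override
    protected IStatus run(IProgressMonitor monitor) {
        for (int i = 0; i < 1000; i++) {
            // Check the cancellation flag on every iteration so that the job
            // terminates promptly after the user hits the cancel button.
            if (monitor.isCanceled()) {
                return Status.CANCEL_STATUS;
            }
            doSomeWork(i); // placeholder for one unit of actual work
        }
        return Status.OK_STATUS;
    }

    private void doSomeWork(int item) {
        // ...
    }
}
Scheduling it with new LongRunningJob().schedule() runs it in the background with the standard progress reporting, including the cancel button.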
Finally, there is an old but gold article on concurrency in Eclipse - On the Job: The Eclipse Jobs API. I highly recommend it to every Eclipse plugin developer.

Triquetrum project: Using custom figure definitions in Triquetrum


A main deliverable of Triquetrum is a graphical editor based on Graphiti to design and run Ptolemy II workflows. Graphiti is built on GEF but provides its own EMF-based model and API to define the graphical elements in a diagram, and their links to corresponding domain objects. The core concepts in this API are PictogramElements and GraphicsAlgorithms. These get rendered as Draw2d figures.

Whereas the API-based approach for defining figures offers many nice features (check the Graphiti site for a good introduction to the benefits and underlying ideas), it does result in a lot of required coding when the editor needs to support many different graphical shapes. And this happens to be the case for Triquetrum, as each type of workflow component (actors in Ptolemy II) can have a custom icon. Furthermore Ptolemy II comes with a large library of existing actor implementations, and most of them have an existing custom icon definition, either stored in a MoML XML file or in SVG fragments.

As a consequence we've investigated the feasibility of reusing the existing SVG and MoML icon definitions as custom figure definitions in a Graphiti-based editor. The text below describes in detail how this has been implemented.

The text is quite long, but should be of interest for anyone working with Graphiti and with an interest in supporting externally defined shapes, e.g. in SVG. The described mechanism can be reused for integrating other graphical "languages" with Graphiti, i.e. besides SVG.

Actors and their diagram shapes

In Ptolemy II most components of a workflow model are actor instances. Actors can receive data via input ports, perform some well-defined processing and produce results via their output ports.

The default shape for an actor in a Triquetrum workflow diagram is an evolution of the EClass shape in Graphiti's tutorial :

Default actor shape

As in the Graphiti tutorial, the "main" shape is a rounded rectangle. The differences with the tutorial shapes are :

  • adding input and output ports
  • showing configuration parameters (will probably change)
  • adding a small icon image at the top-left

Our goal is for externally-defined custom figure definitions to replace the rounded rectangle and its contents, while maintaining similar port layouts. For example, for Ptolemy II's MovingAverage actor :

MovingAverage actor shape

This actor shape is defined in a file similar to (just a part of it):

<?xml version="1.0" standalone="no"?>
<!DOCTYPE property PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<property name="MovingAverageIcon" class="ptolemy.vergil.icon.EditorIcon">
  <property name="rectangle" class="ptolemy.vergil.kernel.attributes.RectangleAttribute">
    <property name="_location" class="ptolemy.kernel.util.Location" value="[-35.0, -20.0]"></property>
    <property name="width" class="ptolemy.data.expr.Parameter" value="60"></property>
    <property name="height" class="ptolemy.data.expr.Parameter" value="40"></property>
  </property>
  <property name="line" class="ptolemy.vergil.kernel.attributes.LineAttribute">
    <property name="_location" class="ptolemy.kernel.util.Location" value="[-15.0, -9.0]"></property>
    <property name="x" class="ptolemy.data.expr.Parameter" value="15.0"></property>
    <property name="y" class="ptolemy.data.expr.Parameter" value="0.0"></property>
  </property>
  ...
</property>

Graphiti support for custom figures

Information on how to plug-in/reuse externally defined custom figures was obtained via the Graphiti forum, e.g. at :

and via a short code example at Code and Stuff:

This boils down to 3 steps :

  1. The AddFeature for the respective model element type should create a PlatformGraphicsAlgorithm, instead of composing the shape using standard Graphiti GraphicsAlgorithms, using something like:
   GraphicsAlgorithm extFigure = Graphiti.getGaCreateService().createPlatformGraphicsAlgorithm(containerGA, someId);
  2. You need to provide an implementation of an IGraphicsAlgorithmRendererFactory and register it in your editor's DiagramTypeProvider by overriding the dedicated method, e.g.:
    @Override
    public IGraphicsAlgorithmRendererFactory getGraphicsAlgorithmRendererFactory() {
     if (factory == null) {
       factory = new TriqGraphicsAlgorithmRendererFactory();
     }
     return factory;
    }
  3. That factory must create custom implementations of org.eclipse.draw2d.Shape marked with the interface IGraphicsAlgorithmRenderer. These are invoked during the diagram's rendering to provide the desired custom shape.

Applying this in Triquetrum

In the solution approach described above, there is an indirection between step 1 and steps 2 & 3. The PlatformGraphicsAlgorithm created by the AddFeature just provides an abstract wrapper for what will eventually get rendered, but is unable itself to pass concrete shape information back to the AddFeature. The factory and shape implementations only get invoked during the rendering of a Diagram, and it's only at that moment that we're able to interpret the shape's definition in detail.

Concretely this means that the AddFeature has no way to obtain detailed formatting/sizing information for the specific shape definition (unless we start duplicating specific parsing logic etc. in there, which we don't want to). This is a deviation from the approach for Graphiti "native" shape construction, where the AddFeature is responsible for setting the actual sizes. And it leads to problems getting the sizing and layout to work for the container shape that must combine the port figures with the externally defined shape...

A simple solution would be to enforce standard sizes for all model elements, but this is not acceptable in our case. To be able to handle arbitrary sizes, while maintaining the encapsulation of the actual shape definition technology, the custom shapes in Triquetrum are able to force a resizing of their Graphiti container shape. The details on how this is done are described below.

In Triquetrum, the contents of the editor palette are defined using extensions. Two of the elements that must be specified for a palette entry are :

  • iconType : svg, ptolemy, img
  • icon : a plugin relative path to the image file, to be used as the icon for this palette entry. For img, the icon file is assumed to contain a 16x16 image that will be shown at the top-left corner of a default shape for the corresponding diagram element. For svg or ptolemy the icon file is assumed to define the complete shape of the diagram element.

The configured palette entries are mapped to ModelElementCreateFeatures in our TriqFeatureProvider. When a model designer drag-n-drops an actor from the palette on the model canvas, the properties of the palette entry get passed along from the ModelElementCreateFeature to the relevant AddFeature. Then for example the ActorAddFeature checks the iconType property that was passed along and determines which shape generation approach must be triggered.

When the actor has an externally defined shape, the ActorAddFeature invokes :

protected GraphicsAlgorithm buildExternallyDefinedShape(IGaService gaService, GraphicsAlgorithm invisibleRectangle,
                                    ContainerShape containerShape, String iconType, String iconResource) {

    GraphicsAlgorithm extFigure = Graphiti.getGaCreateService().createPlatformGraphicsAlgorithm(invisibleRectangle,
                                    iconType);
    Property property = MmFactory.eINSTANCE.createProperty();
    property.setKey("iconType");
    property.setValue(iconType);
    extFigure.getProperties().add(property);

    property = MmFactory.eINSTANCE.createProperty();
    property.setKey("iconResource");
    property.setValue(iconResource);
    extFigure.getProperties().add(property);

    // We need to set an arbitrary non-0 size to get things working in the shape implementations.
    // This size will be changed as needed by the figure, depending on its actual defined size.
    gaService.setLocationAndSize(extFigure, SHAPE_X_OFFSET, 0, 40, 40);
    return extFigure;
}

This triggers the creation of our custom shape implementation, and passes the icon definition information along.

When the diagram gets rendered, Graphiti triggers the creation of the actual custom shape via the TriqGraphicsAlgorithmRendererFactory. This one then creates a shape instance of the right type, matching the configured iconType :

public class TriqGraphicsAlgorithmRendererFactory implements IGraphicsAlgorithmRendererFactory {

  @Override
  public IGraphicsAlgorithmRenderer createGraphicsAlgorithmRenderer(IRendererContext rendererContext) {
    String iconType = null;
    for (Property property: rendererContext.getPlatformGraphicsAlgorithm().getProperties()) {
      if("iconType".equalsIgnoreCase(property.getKey())) {
        iconType = property.getValue();
        break;
      }
    }
    switch(iconType) {
    case TriqFeatureProvider.ICONTYPE_PTOLEMY :
      return new PtolemyModelElementShape(rendererContext);
    case TriqFeatureProvider.ICONTYPE_SVG :
      return new SvgModelElementShape(rendererContext);
    default :
      return null;
    }
  }
}

Finally, the shape implementation must read its icon definition and must transform it in a visible shape.

SVG implementation details

SVG-based definitions are handled by org.eclipse.triquetrum.workflow.editor.shapes.svg.SvgModelElementShape, which uses Batik and a simplified version of GMF's SVGFigure.

As an example, Ptolemy II's CSVReader actor has an icon defined as :

<svg x="-25" y="-20" width="50" height="40" style="overflow: visible">
  <rect x="-25" y="-20" width="50" height="40" style="fill:white"/>
  <polygon points="-15,-10 -12,-10 -8,-14 -1,-14 3,-10 15,-10 15,10, -15,10" style="fill:red"/>
  <text x="-11" y="4" style="font-size:11; fill:white; font-family:SansSerif">CSV</text>
</svg>

which should result in something like :

CSVReader actor shape

Rendering the actor shape involves the following steps:

  1. Determine the bounds of the defined shape
  2. Update the size of the container shape to match the determined bounds
  3. Translate SVG coordinates to a top-left origin of (0,0)
  4. Paint the SVG figure

Ideally the SVG definition would only need a single parsing from which we could obtain size info and trigger the rendering. The current usage of GMF's SVGFigure and Batik does not support this, so there's room for optimization here!

Determining the SVG shape's bounds

SvgModelElementShape does this as follows, using Batik :

private Rectangle determineExtremeBounds(String uri) throws IOException {
  LOGGER.trace("SVG determineExtremeBounds - entry - for {}", uri);
  String parser = XMLResourceDescriptor.getXMLParserClassName();
  parser = parser != null ? parser : "org.apache.xerces.parsers.SAXParser";
  SAXSVGDocumentFactory factory = new SAXSVGDocumentFactory(parser);
  Document doc = factory.createDocument(uri);
  UserAgent agent = new UserAgentAdapter();
  DocumentLoader loader = new DocumentLoader(agent);
  BridgeContext context = new BridgeContext(agent, loader);
  context.setDynamic(true);
  GVTBuilder builder = new GVTBuilder();
  GraphicsNode root = builder.build(context, doc);
  int height = (int) root.getGeometryBounds().getHeight();
  int width = (int) root.getGeometryBounds().getWidth();
  int minX = (int) root.getGeometryBounds().getMinX();
  int minY = (int) root.getGeometryBounds().getMinY();

  Rectangle result = new Rectangle(minX, minY, width, height);
  LOGGER.trace("SVG determineExtremeBounds - exit - for {} - bounds {}", uri, result);
  return result;
}

Updating the size of the container shape

This is where the custom figure implementation must reach back into Graphiti to obtain a ResizeShapeFeature for its container shape and to execute it on Graphiti's editing domain's command stack. As the shape rendering is done outside of the normal Graphiti feature processing flow, there's some extra work required here to set that up :

  protected void setInitialSize(GraphicsAlgorithm ga, int width, int height) {
    if(!getGaProperty("renderDone").isPresent()) {
      final TransactionalEditingDomain editingDomain = dtp.getDiagramBehavior().getEditingDomain();
      final IFeatureProvider fp = dtp.getFeatureProvider();

      final RecordingCommand command = new RecordingCommand(editingDomain, getIconURI()) {
        private IStatus result = null;

        @Override
        protected void doExecute() {
          try {
            GraphicsAlgorithm parentGA = ga.getParentGraphicsAlgorithm();
            ResizeShapeContext context = new ResizeShapeContext((Shape) parentGA.getPictogramElement());
            // the extra 15 is to provide space for the port figures
            context.setSize(width+15, height);
            context.setX(parentGA.getX());
            context.setY(parentGA.getY());
            context.putProperty("forced", "true");
            IResizeShapeFeature resizeShapeFeature = fp.getResizeShapeFeature(context);
            if(resizeShapeFeature!=null) {
              resizeShapeFeature.resizeShape(context);
            }
            addGaProperty("renderDone", "true");
            result = Status.OK_STATUS;
          } catch (OperationCanceledException e) {
            result = Status.CANCEL_STATUS;
          }
        }

        @Override
        public Collection<?> getResult() {
          return result == null ? Collections.EMPTY_LIST : Collections.singletonList(result);
        }
      };

      // Execute (synchronously) the defined command in a proper EMF transaction
      editingDomain.getCommandStack().execute(command);
    }
  }

Translating the SVG coordinates

The SVG definition is free to use negative (x,y) coordinates. This is not the case for rendering something in a draw2d shape, which assumes that the top-left corner is at the (0,0) origin.

To support a translate transformation to the (0,0) origin, and at the same time to avoid bringing in loads of GMF dependencies, the GMF SVGFigure has been extracted and simplified a bit. And we have added support for coordinates translation :

    figure.setTranslateX(-minX);
    figure.setTranslateY(-minY);

Paint the figure

Finally, we can just invoke SVGFigure.paint(). SvgModelElementShape.fillShape() performs the 4 steps as follows :

@Override
protected void fillShape(Graphics graphics) {
  LOGGER.trace("SVG fillShape - entry - for {}", getIconURI());
  try {
    svgShapeBounds = svgShapeBounds != null ? svgShapeBounds : determineExtremeBounds(getIconURI());
    int minX = svgShapeBounds.x;
    int minY = svgShapeBounds.y;
    int width = svgShapeBounds.width;
    int height = svgShapeBounds.height;
    setInitialSize(ga, width, height);

    SVGFigure figure = new SVGFigure();
    // move SVG figure from its defined top-left to origin (0,0) top-left
    figure.setTranslateX(-minX);
    figure.setTranslateY(-minY);
    figure.setURI(getIconURI());
    figure.setBounds(this.getBounds());
    figure.paint(graphics);
  } catch (IOException e) {
    LOGGER.error("Error drawing SVG shape "+getIconURI(), e);
  }
  LOGGER.trace("SVG fillShape - exit - for {}", getIconURI());
}

Note that SVGFigure.paint() results in an SWT Image, so the result is no longer scalable/vectorial. The Ptolemy MoML icon definitions, on the other hand, have been implemented on corresponding draw2d concepts and behave better as a consequence. If anyone would be willing to implement a full or partial SVG-to-draw2d bridge, that would be fantastic of course!

Conclusion

We have described an approach to use Graphiti's available mechanisms for rendering custom figures to reuse large collections of existing externally defined figures. The most important issue to address was to support custom sizes in those figure definitions. This has been addressed by having the custom figure implementations invoke a Graphiti resize feature.

At this stage of Triquetrum, the above approach caters to our needs. But there is still room for improvement on several aspects :

  • SVG figures render as plain SWT Images and thus lose their nice scaling.
  • We need better control over the frequency of invoking the fillShape() method. Graphiti (or GEF or Draw2D?) seems to invoke it far too often, and this induces a performance overhead.
  • It could be a good idea to provide shortcuts in Graphiti to allow accessing the custom figure information from inside the AddFeature implementation. This would lead to more uniformity between handling Graphiti native shapes and custom figures, and would avoid the need for the figure implementations to "call back" into Graphiti features and command stacks.
  • ... any other ideas?

The full source code can be found at the Triquetrum Github repository.

You can discover and follow Triquetrum via :

  • The project site at : https://projects.eclipse.org/projects/technology.triquetrum
  • Source repository at : https://github.com/eclipse/triquetrum
  • Mailing list : https://dev.eclipse.org/mailman/listinfo/triquetrum-dev

Ekkehard Gentz: Qt for Mobile x-platform Development


My session at MobileTechCon went very well. The session was recorded and will later be available here. Because this session was held in German, I was asked for an English version, so I translated the slides and recorded an English webcast – now online:

You probably know that my main work is developing mobile business apps for BlackBerry 10. This is still the case. But since the BlackBerry PRIV became available running Android 5.1.1 (soon 6.0), I have been asked by customers to build x-platform apps running on BlackBerry 10, Android and iOS (later also Windows 10).

I'm not a fan of web or hybrid apps and like to develop native apps. On the other hand, I really don't want to develop apps for each platform natively but differently, with different programming languages, IDEs, …

My BlackBerry 10 development is done with Cascades UI Framework using QML to describe the UI and C++/Qt 4.8 for business logic, network and so on. I really like the easy way to design complex UI with QML.

Just last week Qt 5.6 was released and I did some tests. Qt 5.6 contains a technical preview of qt.labs.controls, giving me all the UI controls I need for mobile app UI and navigation. There are now also Google Material and Microsoft Universal styles, making the controls look very nice. The new controls have dropped all the heavy parts of the previous Qt Quick Controls 1, and event handling is now done in C++. (Cascades always did this.)

Starting with Qt 5.6, High DPI support is available for all platforms. All this new tech, together with a new Startup / Indie Dev offer, motivated me to start with Qt for mobile x-platform development. You don't have to use the commercial license – there are also many ways to use the open-source licenses, even without making your own app open source. Qt has existed for 20 years and is FREE and Open Source software.

Get a first overview from my video. A new blog series will start here in the next few days to go much deeper into the details and to give you recipes on how to start with Qt.

Qt allows me to re-use code for x-platform development:

03_platform_reuse

Next time speaking about Qt for Mobile:

MFS-Logo

2016-05-31 – cu in Stuttgart.

 

 


Filed under: BB10, C++, Cascades, mobile, Qt

Eclipse Announcements: Eclipse Newsletter - Big Geo Data at LocationTech

Learn more about the LocationTech working group and its projects: GeoMesa, GeoWave, and Whiskers.

Kichwa Coders: Eclipse: Open Technology for Everything and Nothing in Particular


Eclipse is so much, much more than an IDE these days. For starters, there are many exciting technologies being developed by the Internet of Things, Science and LocationTech groups. We really need to showcase these to the wider world. This was the excuse to have an event in London bringing together these different technologies and communities for a night of tech and merriment.

The event Eclipse Converge: blending LocationTech, IoT & Science was very generously hosted by Geovation, the Innovation Hub from the Ordnance Survey. We are very grateful to the whole team there for helping with organising and ensuring this event went off without a hitch. They have a terrific space and laid out quite a spread of food and drink, which set the scene well for our six speakers. Here is the story of the evening, partly told by the lovely tweets from the community.

GeoGig: A Git-Like Approach To Geospatial, Joe Allnut, Ordnance Survey

Representing the thriving Eclipse LocationTech group, Joe Allnut talked about GeoGig, an Eclipse project which is essentially git for geospatial data. A very handy tool considering the Ordnance Survey’s maps database has about 10k changes a day. It was great seeing the tool in action to help visualise the map changes during construction at London’s Olympics site.

Data Analysis and Visualisations with DawnSci, Jacob Filik, Diamond Light Source

Onto science, and the great work being done by Diamond Light Source, founding members of the Eclipse Science group. Jacob Filik gave us a great insight into the fascinating research carried out at the synchrotron and demonstrated the analytic and visualisation prowess of DawnSci, an Eclipse project which has also spun out the Eclipse January project.

A Brief Overview of Eclipse IoT: What’s There? What’s Still Needed?, Boris Adryan, ThingsLearn

Time for some IoT next with Boris Adryan. Boris set about breaking down the Eclipse IoT landscape for us, highlighting useful projects such as Paho and Mosquitto. What was great about this talk was the insight and feedback as to what just works and what is still rough around the edges, as well as the wishlist (documentation please!). Check out the entertaining slides here (really what font is that?).

Indoor Positioning with Bluetooth Low Energy and Espruino JavaScript, Gordon Williams, Espruino

This was an IoT meets LocationTech talk. For most, it was our first look at a BBC Micro:bit, running Javascript no less, talking to iBeacon hardware for an indoor positioning application! This is all achieved using the Espruino interpreter, with Gordon drawing stirring parallels with the BBC Micro and its BASIC interpreter. Next generation indeed, with all the fun and drama of a live demo!

The Oxford Flood Network, David Simpson, Nominet

The Internet of Things is nothing if it’s not solving real world problems. So enter the real-world problem of flooding rivers and the millions of pounds of damage they cause. David Simpson gave us a highly energetic overview of the technology behind the flood network, from sensors to cool data visualisations. Even better was the example of how the underlying technology was quickly adapted and repurposed for a car-park monitoring application, emphasising the reusability behind the technology.

Open Source and the Price of Butter, Andrea Ross, Eclipse Foundation

Wrapping things up under the open-source umbrella of the Eclipse Foundation, Andrea Ross gave her talk with its intriguing reference to butter. All was revealed as we learnt about the ‘Internet of Cows’ and the crucial role foundations like the Eclipse Foundation play not just in the governance of open source, but also bringing the community together and keeping the machinery, er, well buttered! Besides the great talk, Andrea was instrumental in making this event happen and getting us all talking to each other.

Wrap Up

The evening was wrapped up with a fun quiz with some lovely hip-flasks, donated by the Ordnance Survey, for the winners. Then it was off to the pub for some further debriefing.

Andrea also helped emphasise the variety of companies involved, further testament to the range of technologies being developed at Eclipse.

Once upon a time the Eclipse tagline was 'An IDE for everything and nothing in particular'. Then, that evolved to 'A framework for everything and nothing in particular'. It is long overdue for an update to 'Eclipse: Open technology for everything and nothing in particular'.


Wayne Beaton: Just Drag and Drop to Install


The Eclipse Marketplace is a pretty cool bit of software. It provides a great place for organizations and individuals to make their software available to the community. Even cooler, however, is the Eclipse Marketplace Client which lets you browse the Eclipse Marketplace and directly install new features from within the comfort of your Eclipse IDE.

MarketplaceClient

The Eclipse Marketplace Client is included in all of the products hosted on the Eclipse Downloads page and configurations realized by the installer. Open the Marketplace client via the Help > Eclipse Marketplace… menu.

I’ll admit, however, that I rarely use the Marketplace Client directly. Instead, I make use of the drag and drop feature.

Eclipse Marketplace entries have a handy “Install” button:

MarketPlaceInstall

If you drag this button from your web browser and drop it on your Eclipse IDE, it will start the installation process. You’ll have to review the changes that it proposes and likely agree to licensing terms, but after that, it all just happens. You don’t have to fuss with software source (p2) sites: it just works.

The best part is that you can include those handy install links on your own web pages. Every Marketplace entry has an “External Install Button” tab that gives you the HTML to insert onto your page.
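
For illustration, that generated markup looks roughly like the sketch below. The entry ID, image URL, and title here are placeholders rather than the values for any real entry, so copy the exact HTML from the entry’s “External Install Button” tab instead:

<a href="https://marketplace.eclipse.org/marketplace-client-intro?mpc_install=ENTRY_ID" title="Drag to your running Eclipse workspace to install">
  <img src="https://marketplace.eclipse.org/PLACEHOLDER/btn-install.png" alt="Drag to your running Eclipse workspace to install"/>
</a>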

Do you miss having CVS integration in your IDE? You can install it right now:

Drag to your running Eclipse workspace to install CVS Integration

Do you need a German language pack for your Eclipse IDE? Drag and drop this and Eclipse speaks your language:

Drag to your running Eclipse workspace to install Eclipse IDE Language Pack:  Deutsche

Or maybe install the Docker integration:

Drag to your running Eclipse workspace to install Eclipse Docker Tooling

For users, it couldn’t be easier to extend the Eclipse IDE in all sorts of ways.

For Eclipse Project teams, the Eclipse Marketplace is a great way to help people get and use your software. When you create a marketplace entry that pulls software from the Eclipse downloads server, that entry is automatically added into the Eclipse Project Market and is given the Eclipse Project badge.

When you’re blogging about the cool new features in your latest release, why not include a handy install button?


Mike Milinkovich: Eclipse Tooling Platforms

Two weeks ago at EclipseCon, the Eclipse Che project announced its 4.0 release. This announcement is the first major result from the Eclipse Cloud Development strategy we announced eighteen months ago. Eclipse Che is an innovative new IDE platform which has been designed specifically for the needs of web and cloud developers, offering a whole new way to think about developer workspaces in a container world. Tyler Jewell, the Che project leader and CEO of Codenvy did a keynote at EclipseCon North America where he welcomed IBM, Microsoft, Red Hat, and SAP on stage to show what they are already doing with the Che technology. The reaction to the announcement from developers, adopters, and the press has been amazing.

In short, Eclipse Che is on track for becoming a huge success.

However, as with many things in life, success in one area raises questions about others. In particular, we’ve heard some questions about what this all means for the Eclipse JDT IDE that developers have known and loved for the past fifteen years. TL;DR: Eclipse Che and the Eclipse IDE platform are complementary to one another, and both are going to be more successful because of each other.

More details:

Is Eclipse Che going to replace the Eclipse IDE?
No. It’s a different project, staffed by a different team. Remember, this is open source where the community is the capacity. There is obviously some overlap between both, but they have distinct goals, advantages and benefits, so the Eclipse IDE platform remains relevant and actively developed.

Are Eclipse Che and the Eclipse IDE interoperable?
Partially. There are ways to move many projects between the Eclipse IDE and Eclipse Che. We generally see many opportunities to make it simpler for developers to smoothly transition from local to distributed development and back. There are generally more opportunities for the projects to collaborate than to compete.

So there are 2 IDE platforms in the Eclipse Community?
The Eclipse Community actually has three platforms for building tooling extensions: Eclipse RCP, Eclipse Orion, and Eclipse Che. Eclipse RCP’s desktop plug-in model and structure is widely adopted and broadly understood. Eclipse Orion provides a client-side plugin framework to enable web tooling and editor extensions. Eclipse Che builds on Orion and Eclipse JDT to create a distributed workspace and cloud IDE extension platform. These platforms are partially competing, and we’re fine with that.

Why is the Foundation fine with that?
The community is the capacity, and we would much rather have innovative new projects happen at Eclipse than elsewhere. The Eclipse Foundation is fine with internal competition. Both the Foundation and the Community know that competition can bring innovation. Moreover, Eclipse Che and the Eclipse IDE have different objectives that drive them to create different extension architectures.

What are the main differences?
Che defines a workspace to include all of the dependencies necessary to let a developer contribute without first installing software. The Che workspace includes a runtime, project files, and a cloud IDE. The nature of workspaces makes them portable and shareable. Che provides a server that hosts multiple workspaces for a group. The Eclipse IDE targets the developer workstation with tighter integration to the system and more options to customize it locally.

Is this short-term, mid-term, long-term…?
We are not the ones who decide this. Developers now have one more alternative with Eclipse Che, and we’ll let them make their choices and drive the future of software development. Let’s ask this again in 5 years 😉


Filed under: Foundation

Cedric Brun: Eclipse Modeling Package Neon M6 is ready for testing

The teams have been working hard and pushed many changes. I’ve been tweeting those as they went, but I figured that compiling a list into a blog post could be useful. Here are some noteworthy ones; this is not an exhaustive list, so if you think I missed something, please reach out to me on the Mattermost instance before M7.

Modeling Amalgam

Thales contributed to Amalgam the building blocks to create views similar to those used in Capella: the Activity Explorer and the Contextual Explorer.

The features are now part of the Neon update site and can be consumed by other projects, starting with EcoreTools.

You’ll find some documentation on the wiki here and there; Capella’s source code is also a good starting point.

Platform/SWT

Many important improvements in Linux/GTK3 support, notably a fix for the issue with the layout of editors (“leaking” rulers).

SWT now auto-detects the scaling factor, which might be needed on HiDPI screens. The Linux support is not perfect yet, and it looks like GTK is giving the committers a hard time here again. In the meantime, if the UI is scaled incorrectly when starting Eclipse, add -Dswt.enable.autoScale=false after -vmargs in your eclipse.ini to disable this automatic scaling.
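
For example, the end of eclipse.ini would then look something like this (your own file will list additional VM arguments after -vmargs; just append the new property to them):

-vmargs
-Dswt.enable.autoScale=false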

Packaging

The modeling package is now using the brand new Solstice theme for the welcome page. It is now explicitly focused on the package domain and it looks slicker, see:

We also moved away from the “we fix every single version of the tools in the package” approach to using root features. The outcome of this is that it is now possible for an end user to upgrade parts of the package without having to wait for a full Eclipse release. Another effect of this change is that you cannot update a pre-M6 EPP package to M6; you must download a fresh version. The potential downside is that the user could end up with an installation which mixes different streams or which is quite unexpected for us packagers. See bug 332989 for more details.

Sirius 4.0.0M6

  • The Sirius runtime no longer depends on the JDT.
  • Closing a modeling project is now way faster. We went from 20s to 0.6s closing a project with 1 million model elements. The strategy to dispose a Resource instance no longer relies on resource.unload(), which was eating a lot of time and was completely useless 99% of the time. The problem was the last 1%, which relied on some events being sent to clear static caches (I’m looking at you, UML). The Sirius team designed a strategy which should meet the requirements of 100% of the resource implementations we know of, and as it is better to be safe than sorry, a specific strategy can be contributed for the other percent we don’t know of yet ;)
  • The Acceleo Query Language has had a good number of updates, notably API-wise, leading to a 5.0.0 version for the corresponding packages. Make sure your version ranges now include this version (see the manifest sketch after this list).
  • SVG rendering has been improved and is now correct whatever the zoom level, as the before/after screenshots show.
  • If you install Sirius in the package to define your own modeling tool (only the runtime is pre-installed in the package), you’ll see that it is now possible to constrain on which side of a shape a bordered node (port) may appear. This improvement had many supporters, just look at the number of +1 comments. Now it’s your job to give it a try and tell us if that fits your use case!
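
As promised above, here is a hypothetical MANIFEST.MF fragment for a plug-in that depends on AQL directly; the bundle name is quoted from memory, so double-check it against your target platform:

Require-Bundle: org.eclipse.acceleo.query;bundle-version="[5.0.0,6.0.0)"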

Ecore Diagram Editor

  • All the goodness from the Sirius 4.0.0M6 runtime (including SVG improvements)
  • EcoreTools can now display EReferences which are listed within the EClass.

EMF Compare

An integration with Sirius to bring graphical comparison is now available through the EMF Compare update site.

Installing the diagram comparison support

This feature brings experimental support for graphical comparison of any Sirius-based modeler, in particular EcoreTools, which is in the package. Note that you have to install the EGit support feature to be able to compare versions through the Git history.

Ecore diagram comparison
Family DSL diagram comparison

Bugfixes and other improvements

That’s just the tip of the iceberg; many other changes in technologies included in the package are published with M6. Thanks to everyone involved!

Now would be a good time for testing. Download the package, give it a try, report back either using the modeling channel or through bugzilla.

Stay tuned!

Eclipse Modeling Package Neon M6 is ready for testing was originally published by Cédric Brun at CTO @ Obeo on March 30, 2016.


Eclipse Scout: Eclipse Scout Neon Release

Starting with the upcoming Eclipse Neon release the Scout framework is directly based on Java and HTML5. This move allows projects to easily integrate popular Java frameworks and to use “Maven-by-the-books” for building purposes.

On the HTML5 side, full CSS3 support is now available and integration of modern JavaScript libraries to implement project-specific UI components can be achieved in a straight-forward way.

At the same time, great care has been taken to make sure that the existing Scout developer community feels at home right away and does not need to re-learn Scout from scratch.

Current state of the SDK Tooling

With the M6 release the most important parts of the Scout SDK are becoming available. The M5 release already offered Scout specific code completion for form fields, table columns, menus, and codes in the Java perspective. With M6 the Scout SDK also includes Scout component wizards for complete Scout forms, pages, code types and more.

To add a Scout component just select the appropriate Java package in the Eclipse Explorer view, press [Ctrl]-[N] and search for Scout as shown below on the left side:

To create a new Scout form simply select the Scout Form wizard and enter the name of the new form as shown on the right side of the illustration above. As in previous Scout releases the Scout SDK will then create all the necessary Java code for the new form including life cycle management, permissions and the corresponding service on the Scout backend server.

Adding new form fields can then be done directly in the Java editor with the Scout SDK addition to the Eclipse IDE code completion. Pressing [Ctrl]-[Space] opens the available templates, where the applicable Scout components are presented first. As an example, the screenshot below shows how simple it is to add new form fields.
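
To give a concrete idea of what such a template produces, here is a minimal sketch of a Scout string field. It is normally generated as an inner class of the form's MainBox; the field name, text key, and exact package names below are assumptions, and the SDK generates the correct imports for you:

import org.eclipse.scout.rt.client.ui.form.fields.stringfield.AbstractStringField;
import org.eclipse.scout.rt.platform.Order;
import org.eclipse.scout.rt.shared.TEXTS;

// A simple string field inside a form's MainBox (the names here are made up).
@Order(10)
public class NameField extends AbstractStringField {

  @Override
  protected String getConfiguredLabel() {
    return TEXTS.get("Name"); // label text looked up from the translated texts
  }

  @Override
  protected int getConfiguredMaxLength() {
    return 60; // limit the user input to 60 characters
  }
}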

Eclipse Help

With the M6 milestone we have added initial Eclipse Help content that should get you started quickly with the Scout package. This content is available via [F1] or the Eclipse IDE menu Help => Help Contents.

Chapter “Getting Started” describes how to create your first Hello World application.

The help chapter “Import the Scout Demo Applications” describes how to import the existing Scout Neon demo applications into your workspace. For this tutorial the Oomph import wizard is used which makes installing and running the demo applications much simpler than in the past.

Can’t wait? Try it now!

Open your favorite browser and head over to the Eclipse download page. To access the Neon milestone release make sure to click on the “Developer Builds” as indicated by the orange arrow in the screenshot below.

If you run into any issues or would just like to share your experience with the new Scout Neon release, please let us know in the Scout forum.

Outlook

For the Neon June release we plan to include Scout tablet support for the HTML5 UI. The Scout mobile support will not make it in time for the Neon June release and will be included in one of the Neon follow-up releases. As soon as we can commit to a schedule regarding mobile support we will of course share this information with you.

For the remaining time until the Neon release, the Scout project is concentrating on making sure that the Scout Neon release is production-ready for commercial applications by the official release date.

In parallel, we are now upgrading the central parts of the Scout documentation and co-organizing the Eclipse Neon DemoCamps in Munich on June 20th and Zurich on June 21st. We are looking forward to meeting you there.

Feedback? Please use this forum thread

Scout Links

Project Home | Forum | Wiki | Twitter | Google+ | Professional Support

vert.x project: Vertx 3 and Keycloak tutorial

With the upcoming release of Vert.x 3.3 securing your application with Keycloak is even easier than before.

About Keycloak

Keycloak describes itself as “Open Source Identity and Access Management for Modern Applications and Services”.

With Keycloak you can quickly add authentication and authorization to your Vert.x application. The easy way is to set up a realm in Keycloak and, once you’re done, export the configuration to your Vert.x app.

This is how you would secure your app (a minimal Java sketch follows these steps):

  1. create an OAuth2Auth instance with OAuth2Auth.createKeycloak(...)
  2. copy your config from the keycloak admin GUI
  3. setup your callback according to what you entered on keycloak
  4. secure your resource with router.route("/protected/*").handler(oauth2)
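
A minimal sketch of those four steps in a verticle might look like this; the realm, client ID, URLs, and secret are placeholders, so use the JSON you exported from the Keycloak admin GUI instead:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.auth.oauth2.OAuth2Auth;
import io.vertx.ext.auth.oauth2.OAuth2FlowType;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.OAuth2AuthHandler;

public class ProtectedApp extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);

    // 1. + 2. create the OAuth2Auth instance from the config copied from the Keycloak admin GUI
    JsonObject keycloakConfig = new JsonObject()
        .put("realm", "my-realm")
        .put("auth-server-url", "http://localhost:8180/auth")
        .put("resource", "my-vertx-app")
        .put("credentials", new JsonObject().put("secret", "changeme"));
    OAuth2Auth keycloak = OAuth2Auth.createKeycloak(vertx, OAuth2FlowType.AUTH_CODE, keycloakConfig);

    // 3. the callback must match what was entered in Keycloak
    OAuth2AuthHandler oauth2 = OAuth2AuthHandler.create(keycloak, "http://localhost:8080");
    oauth2.setupCallback(router.route("/callback"));

    // 4. everything under /protected/* now requires a valid login
    router.route("/protected/*").handler(oauth2);
    router.route("/protected/hello").handler(ctx -> ctx.response().end("hello, authenticated user"));

    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}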

Screencast

The following screencast explains how you can do this from scratch:

Don’t forget to follow our YouTube channel!

Benjamin Cabe: Running Eclipse Che on a Raspberry Pi

Eclipse Che is a very cool Eclipse technology that provides you with a browser-based IDE that can be extended with plug-ins for virtually any language, framework, or tool that you may want to use in your day-to-day development.

This means that, right from your browser, you can do Java development and have Maven automatically build your stuff, or do JavaScript development and still be able to easily integrate with e.g. Grunt to build your website.

As you may have guessed, most of the magic of Che is in its server. While in many cases you will run the Che server on your own laptop or private server, it’s also pretty cool to run it on an embedded/IoT device such as a Raspberry Pi, so that not only do you have an “IDE-in-a-box” setup, but you can also actually develop code targeting the Pi itself. And yes, that means blinking LEDs… and more! ;-)

Install Docker

Assuming you are running an up-to-date Jessie distribution, it should be fairly straightforward to install the armhf version of docker provided by the Hypriot team.

cd ~/Downloads
wget https://downloads.hypriot.com/docker-hypriot_1.10.3-1_armhf.deb
sudo dpkg -i docker-hypriot_1.10.3-1_armhf.deb
sudo usermod -aG docker pi

At this point, you want to quickly log out and log back in, in order for the addition of the user pi to the docker group to be properly applied. Then, we can test that Docker is indeed running:

docker ps

This should present you with an empty list of running Docker containers. How surprising! But at least it means your Docker setup is taken care of!

FWIW, the Hypriot folks have a Debian repo that makes things easier. I have had problems with it, though, so you may want to stay away from it until they fix it.

Downloading Che

wget https://install.codenvycorp.com/che/eclipse-che-latest.zip
unzip eclipse-che-latest.zip
cd eclipse-che*

Updating Che’s built-in stacks to be ARM-compatible

When Che creates your development environment, it instantiates a Docker container that has the tools you need. That is to say, if you are going to do Node development, Che can provision a so-called “stack” that contains npm, grunt, etc. The stacks configured by default in Che are based on x86 Docker images, so you will need to replace them with armhf-compatible ones.

sed -i 's/codenvy\/ubuntu_jdk8/kartben\/armhf-che-jdk8/g' stacks/predefined-stacks.json
sed -i 's/codenvy\/node/kartben\/armhf-che-node/g' stacks/predefined-stacks.json

I’ve built an image for Java and Node development, which means you’ll be able to use the “Java”, “Node”, and “Blank” ready-to-go stacks. Should you want to have a look at the Dockerfiles for those, see here.

Note that you don’t have to use the built-in stacks; you can also create your own on the fly, using a custom recipe. There as well, the base Docker image you’re building from will need to be armhf. You may want to use images from hypriot or armv7 on Docker Hub.
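
For what it’s worth, such a recipe is essentially a Dockerfile that starts from an armhf base image. A hypothetical minimal example is sketched below; the image name is only an example and your recipe may need extra tooling depending on the stack, so check the Che recipe documentation:

# start from an ARM-compatible base image instead of an x86 one
FROM hypriot/rpi-node:latest
# add whatever tools your workspace needs, e.g. a build tool
RUN npm install -g grunt-cli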

Update other default settings

The Raspberry Pi 3 is quad-core, which means you actually get some very decent performance out of it. However, it’s still an embedded sort of device, and SD cards are typically not fast either. It’s a good idea to increase the timeout Che uses to detect that a workspace is properly provisioned.

sed -i 's/machine.ws_agent.max_start_time_ms=60000/machine.ws_agent.max_start_time_ms=240000/g' conf/che.properties

Launch Che!

You’re good to go! All that is left is to launch Che. As you will likely be accessing it from e.g. your desktop computer, you need to make sure to use the -r:<external-IP> command-line argument so that it works properly from “non-localhost”:

./bin/che.sh run -r:192.168.2.26

That’s it! You can now use the Java and Node stacks, and start using your web browser to develop right on your Pi, with *all* the features you would expect from a “real” IDE. Enjoy, and stay tuned for a video tutorial soon.

David Green: A New Life for Green's Opinion

I had a lot of fun with Green's Opinion, but with a change in priorities, blogging moved to the back burner. Priorities have changed again, and once again I'm blogging - only this time I've moved to a new site, greensopinion.com

I'm already back in the groove with a few new articles:

greensopinion.com

So be sure to bookmark the new site greensopinion.com where I will continue posting my thoughts on software, coding and being a software engineer.

Eclipse Announcements: Organize an Eclipse Neon DemoCamp or Hackathon!

It's time to plan an Eclipse Neon DemoCamp or Hackathon in your city.