Search Results

Search found 27181 results on 1088 pages for 'oracle desktop virtualization'.


  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It naturally lists a number of small changes such as the file layout on disk (mostly due to modularity), some option changes (grizzly, shoal), the removal of node agents (SSH is used instead), the new JPA default provider name, etc. There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job, which means that the migration is as simple as copying a 2.x domain directory into the domains/ directory and starting the server with asadmin start-domain --upgrade. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as versions 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.
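    A minimal command-line sketch of the two steps described above; the install paths and domain name are assumptions, so adjust them to your own 2.x and 3.1.x locations:

        # copy the existing 2.x domain into the new server's domains/ directory
        cp -r /opt/glassfish2/domains/domain1 /opt/glassfish3/glassfish/domains/
        # start the copied domain with the upgrade flag; each module performs its part of the upgrade
        /opt/glassfish3/bin/asadmin start-domain --upgrade domain1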

    Read the article

  • Marek's JAX-RS 2.0 content from Devoxx 2011

    - by alexismp
    Marek Potociar, one of the two co-spec leads for the upcoming JAX-RS 2.0, had a very well-attended session at Devoxx and wrote a blog post about it, detailing his conference experience (his first time at Devoxx) and running through the new features of the specification. A link to the slides is also included in his post. The work by the expert group seems very solid at this point, as you can read for yourself in detail in the recently published early draft document. You can follow the remaining work between now and the middle of next year on the specification project pages on java.net.

    Read the article

  • Fix import hint

    - by Martin Janicek
    Good news everyone! I've implemented the 'Fix import' hint, which should make your life (and most probably also Groovy development) much easier! It works the same way as in the Java editor, so you can choose between classes with the same name. Hope you will enjoy it! And as usual, if you would like to try it on your own, download the latest development build; I will be more than happy to get your feedback!

    Read the article

  • Ubuntu 12.04 LTS Realtek RTL8192E Problems

    - by Logan
    My wifi is not working. I have tried ndiswrapper with the XP driver, but that still did not work. Please explain things to me in layman's terms. I can run commands, so if you tell me exactly what to put in, I will put it in. Basically, Ubuntu is not showing a wireless networks option, and when I ran some command yesterday it said the driver was UNCLAIMED. Please help. This is a Wintec card with a Realtek RTL8192E chip on it. It is a desktop computer. Ubuntu is on a 20 GB partition; the other 60 GB is for Windows (yeah, it's an old desktop). I used the Windows installer to install Ubuntu (this is one of the Dell desktops that won't boot Ubuntu from disk). So any help is appreciated, and I would like to get this done THIS weekend.

    Read the article

  • XAML RadControls Q1 2010 Official

    The Q1 2010 release focuses on strengthening three main aspects of RadControls for Silverlight and RadControls for WPF: ensuring first-class performance for all data-centric controls through various techniques, enhancing and polishing the RadControls themes, and providing highly advanced, enterprise-level features, especially for the data visualization controls. We know that performance is crucial for line-of-business applications. Therefore, we always make sure that RadControls can help you achieve unmatched performance, and this has always been our number one priority. RadControls achieve unbeatable performance through UI and Data Virtualization, Data Sampling and built-in Load On Demand features. Several of the major controls in the bundles have been enhanced with UI virtualization support: Scheduler, CoverFlow and Book. As part of Q1 2010 we also want to bring an unparalleled visual richness to your applications. To achieve that, we have done a major rework of all our themes. We used a uniform templating approach across all controls, streamlined naming conventions for resources and delivered a much more consistent look of the controls along the way. The RadControls for WPF bundle has been enriched with two new controls: Map and Book. Another new control has been included in the Q1 2010 release; however, it remains in a CTP stage. This is the Transition control. We decided that this is the better way to proceed, as we will need some more input from our community on how exactly to develop this control further. Therefore, we will be blogging regularly on the development progress so that we can clearly indicate the direction in which the control is evolving and gather your feedback on whether this is the best direction. Our Charting controls for Silverlight and WPF have been advanced with major new features such as Data Sampling, Zooming and Scrolling, Automatic SmartLabels positioning, Sorting and Filtering, and many more. The new built-in paging of the GridView control now allows you to page through your data, resulting in an even faster and more responsive grid that can easily handle enormously large datasets.

    Read the article

  • CRT as 2nd monitor goes screwy after start up?

    - by rhys
    New install of 12.04 on an old Dell with a Radeon ATI RV516 video card with monitor out and S-Video out. During boot-up all is good: both screens operate and look fine. Then, just before the desktop appears, the CRT goes purple and is covered in heavy horizontal lines, but as I said, during boot-up it was fine and the resolution was fine. The main monitor, an LCD, operates normally. Everything else works fine; it's just the picture on the CRT that is screwed up. I used the same monitor and CRT running 11.10, which worked fine. Any help would be appreciated, and yes, I am a newbie to Ubuntu. Here is a video showing the completely normal screens at reboot, then the purple badness when the desktop loads (and don't laugh at the slow machine, it's old): http://www.youtube.com/my_videos_edit?ns=1&video_id=zfuh6lBMLnc

    Read the article

  • Will we ever lose the human touch?

    - by divya.malik
    I was at a conference two weeks ago which was targeted at sales and marketing professionals. The discussion around the changing landscape in sales was very interesting. More and more selling is moving to the Internet: sales people are delivering more of their presentations online or via the phone. Budget constraints and new technologies have dramatically decreased the need for face-to-face interactions. At the same time, customers are also researching products on their own, taking the advice of peers, making up their minds, and then contacting the vendor. That takes care of more than half of the usual selling process. But humans are social animals, and because of that I believe that despite these changing trends and technologies, the need to maintain the human touch will always remain. One of the presenters at the conference shared this video, which stayed in my mind.

    Read the article

  • What's new with Java technology? Java Embedded

    - by hinkmond
    As this article points out, Java Embedded is a safer, more robust and easier-to-develop platform for small networked devices. So, get ready for good things to come from Java Embedded... See: "Java Embedded: Next New Thing". Here's a quote: Through the past few years the industry as we know it has seen a big boom with the mobile and cloud revolution. Today, there has been an enormous amount of buzz around machine to machine (M2M) or the "Internet of Things," since we are moving into a state where everything is going to have to be interconnected and will have to properly communicate together... Today, Java Embedded provides that platform. I like it! As long as there's no Zombie Apocalypse, I think Java Embedded has a great future! Hinkmond

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can reach multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm...

    All of this is no longer a concern. Check out the new Settings tab under Analytics. Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets to 1/3600th of their size!!! Very cool. I can now allow my datasets to go on forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32MB on disk. (You need not worry so much about the "In Core" value, as that is in RAM and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicked on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you a prompt that allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this: it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what granularity you can live with, and for how long. Check this out. Here is my "Disk: Percent utilized" dataset from 5-21-2012, 2:42 pm to 4:22 pm; after I went through the delete process to change everything older than 1 week to "Minutes", the same date and time range shows far fewer, coarser data points. Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • Not able to install ubuntu 12.10

    - by Janet
    When I try to install Ubuntu on my computer I get errors on these packages: /var/cache/apt/archives/compiz-gnome_1%3a0.9.8.4+bzr3412-0ubuntu0.1_i386.deb and /var/cache/apt/archives/metacity-common_1%3a2.34.8-0ubuntu4_all.deb. OK, since I'm not a whiz at installing Linux or understanding it... when these two things popped up, it stated that the file was corrupted and there were too many errors to complete the install. Now, does that help? I had Ubuntu 12.04 LTS on my computer and wanted to upgrade; now I have nothing on my desktop. When I tried to install from my USB pen, nothing happened, and I also have it on DVD and tried to install from that, and still nothing happened. So maybe someone can tell me why it's not installing on my desktop? I have it on my laptop with Windows.

    Read the article

  • Meet up with the JCP at JavaOne Latin America

    - by Heather VanCura
    The JCP made it to JavaOne Brazil!  We had a quickie presentation earlier today on JCP.Next that was well attended.  Come see us at the OTN mini-theatre tomorrow from 12:00-12:15 pm for a quickie on participation.  Then make your way to the Mezanino Sala 12 at 12:30 pm for CON-22250.  "The Java Community Process: How You Can Make a Positive Difference" will be presented by Heather VanCura, JCP, and Fabio Velloso, SouJava, on Thursday, 6 December, at 12:30 pm.  Find out more about how to participate in the JCP program, the JCP.Next effort and how to get involved with Adopt-a-JSR through your JUG (or on your own)!  Here is the session description (originally in Portuguese): The JCP plays a fundamental role in the evolution of Java. The session will emphasize the value of transparency and participation through the JCP, Java User Groups and the Adopt-a-JSR program. We will also explore some of the upcoming changes to the process through the JCP.Next initiative, and explain how you can get involved. Bring your questions, your suggestions and your concerns. We want to hear from you, and to encourage and facilitate your active participation in advancing the Java platform.

    Read the article

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand in hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology. In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, then accessing that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) that is found on the RPi has a very convenient way to access the GPIO ports through the use of Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a clumsier, less elegant procedural scripting language such as Python. Instead, using Java Embedded allows a fast way to access those GPIO ports through those same Linux file handles. Java already has an easy-to-program way to access file handles, with a degree of performance that matches direct access of those file handles with the Linux OS. Using the Java API java.io.FileWriter lets us open the same file handles that the Linux OS has for accessing the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();
        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
        // Set port for input
        directionFile.write("in");
        // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used with a high degree of performance on par with native C code (only milliseconds to read in data and write out data) and with low overhead (unlike Python) to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an Embedded Device Software Designer. Hinkmond
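    For reference, here is a minimal, self-contained sketch that pulls the fragments above together into one runnable class. The pin number, sampling interval and loop count are assumptions (sysfs GPIO as exposed by Raspbian; run it as a user with permission to write under /sys/class/gpio), so treat it as an illustration of the technique rather than the exact code used with the static electricity sensor.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class GpioReadSketch {

            private static final String GPIO_CHANNEL = "23"; // hypothetical pin number

            public static void main(String[] args) throws IOException, InterruptedException {
                // Reset the pin first; unexport fails harmlessly if the pin was never exported
                try (FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport")) {
                    unexportFile.write(GPIO_CHANNEL);
                } catch (IOException ignored) {
                    // pin was not exported yet; nothing to reset
                }
                // Export the pin so /sys/class/gpio/gpio23/ appears
                try (FileWriter exportFile = new FileWriter("/sys/class/gpio/export")) {
                    exportFile.write(GPIO_CHANNEL);
                }
                // Configure the pin as an input
                try (FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio" + GPIO_CHANNEL + "/direction")) {
                    directionFile.write("in");
                }
                // Poll the current value a few times through a RandomAccessFile handle
                byte[] inBytes = new byte[1];
                try (RandomAccessFile raf =
                        new RandomAccessFile("/sys/class/gpio/gpio" + GPIO_CHANNEL + "/value", "r")) {
                    for (int i = 0; i < 10; i++) {
                        raf.seek(0); // rewind to read the latest value
                        raf.read(inBytes);
                        System.out.println("GPIO " + GPIO_CHANNEL + " = " + new String(inBytes));
                        Thread.sleep(500); // arbitrary half-second sampling interval
                    }
                }
            }
        }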

    Read the article

  • Update Your NetBeans Plugin's "Supported NetBeans Versions" In The Next Two Weeks!

    - by Geertjan
    For each NetBeans plugin uploaded to the NetBeans Plugin Portal, the registration page starts like this: Note how the "Supported NetBeans Versions" field is empty, i.e., no checkbox is checked, for the plugin above. As you can also see, there is a red asterisk next to this field, which means it is mandatory. It is mandatory in the latest version of the NetBeans Plugin Portal, while it wasn't mandatory before, with the result that several plugins were registered without their supported version being set. Therefore, since the version is now mandatory, anyone who doesn't want their plugin to be hidden for the rest of this year, and removed on 1 January 2013 if no one complains about its absence, needs to go to their plugin's registration page and set a NetBeans Version. E-mails have been sent to plugin developers of unversioned plugins already, over the last weeks. Currently there are 91 plugins that still need to have their NetBeans Version set. Probably at least 1/3 of those are my own plugins, so this is as much a reminder to myself as to anyone else! Whether or not you have received an e-mail asking you to set a NetBeans Version for your plugins, please take a quick look anyway; maybe this is a good opportunity to update other information relating to your plugin. You (and I) have two weeks: on Monday 16 April, any NetBeans plugin in the Plugin Portal without a NetBeans Version will be hidden. And then removed, at the start of next year, if no one complains.

    Read the article

  • Is your company thinking of transitioning from Java to another technology?

    - by Augusto
    As every Java developer knows, Oracle bought Sun and the future of Java looks quite unclear, especially since Oracle wants to monetize the JVM. Java as a language has also been stale in the last few years; the non-inclusion of closures is one example (they might be included in Java 1.8). At the same time, some new technologies such as Ruby, Scala and Groovy are being used to deliver complex sites. I'm wondering if there are companies or organizations which are talking about, doing spikes for, or starting to use a different technology, with the idea of no longer using Java for greenfield projects, in the same way that 15 years ago companies migrated from C++, Perl and other technologies to Java. I'm also interested to know what the impressions of this happening are, for example: planning to migrate to a different technology in 2 years. To be clear, I'm not asking which technology is better. I'm asking whether your organization is thinking of leaving Java for another technology.

    Read the article

  • Geek Bike Ride JavaOne 2012

    - by Tori Wieldt
    "Geek Bike Ride?" the clerk at the bike rental shop asked. "Are you guys all from the same company?" "We aren't even from the same country!" we answered. "I'm from Russia." "We're from Germany."  "I'm from Belgium." "I'm from Palo Alto." "I'm from Japan."  "We're from Brazil." "We're from Brazil." "I'm from Sweden." "Coooool" was all she could say. She was right. The Geek Bike Ride was cooool. We had 39 bike riders and one skater show up Saturday for a great route from San Francisco's Fisherman's Wharf, across the Golden Gate bridge, to Saulsalito, and back to the city by ferry. Duke Bike jerseys, sponsored by OTN, were given out. To make sure Java developers got them, each person had to answer a Java question to get a jersey. The questions were really hard, like "Who is the Father of Java?" "What's the biggest Java conference in San Francisco?" The best was when the question was "Name one of Duke's Choice Award winner from this year," and Régina ten Bruggencate answered answered "Me!"  It was foggy throughout the day, with the sun poking out occasionally. The fog was thickest on the bridge, more that one rider commented that we were "in the cloud." It was a great day to meet new friends, and have a chat with old friends. We all had fun, though some of us may more a little more slowly during JavaOne. Ride on!  Photos by permission by Arun Gupta and Yoshio Terada. Thanks, guys!

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple of weeks ago, one of my mentees asked to meet because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to SlideShare I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use.  Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in the evaluation methods practitioners use to evaluate websites and applications. Of course, with practitioners using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.  That said, I have been continuing to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a one-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results.

    Read the article

  • Evaluating and Investigating Drug Safety Signals with Public Databases Webinar

    - by Roxana Babiciu
    In this one-hour webinar, BioPharm Systems' Dr. Rodney Lemery, vice president of safety and pharmacovigilance, will review a number of public databases available to use during the evaluation and investigation of identified safety signals. The discussion will focus on the use of free and paid longitudinal healthcare databases available online. After attending this presentation, you will better understand how these data sources can be used in your daily PV work. Read more here

    Read the article

  • Jersey 1.8 is released

    - by Jakub Podlesak
    Last Friday, we released version 1.8 of Jersey, the open source, production-quality reference implementation of JAX-RS. The JAX-RS 1.1 specification is available at the JCP web site and is also available in non-normative HTML here. For an overview of JAX-RS features, read the Jersey user guide. To get started with Jersey, read the getting started section of that guide. To understand more about what Jersey depends on, read the dependencies section of that guide. See the change log here. This 1.8 version of Jersey is going to be integrated into GlassFish 3.1.1 and mainly contains bug fixes. The most important fix from this perspective is included in the JAX-RS/EJB integration layer. It is now possible to implement JAX-RS resources as EJB session beans which implement local and/or remote interfaces; this functionality was broken in previous releases. Another great addition should come in the client space, where Pavel has already done some preparation in the client API (including some breaking changes there) for the non-blocking asynchronous client feature. The implementation is already part of the experimental Jersey space and should be included in the stable Jersey bits in one of the coming releases. For feedback send email to: [email protected] (archived here) or log bugs/features here.
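    To illustrate the pattern that the JAX-RS/EJB fix re-enables, here is a minimal sketch of a JAX-RS resource implemented as a stateless EJB session bean that also implements a local interface. The class, interface, path and media type below are illustrative only; the post does not show the actual code, so this is just one plausible shape of such a resource.

        import javax.ejb.Local;
        import javax.ejb.Stateless;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;

        // Hypothetical local business interface for the session bean
        @Local
        interface GreetingLocal {
            String greeting();
        }

        // A JAX-RS resource that is also a stateless EJB exposing a local interface --
        // the combination that was broken before and works again as of Jersey 1.8
        @Stateless
        @Path("greeting")
        public class GreetingResource implements GreetingLocal {

            @GET
            @Produces("text/plain")
            @Override
            public String greeting() {
                return "Hello from an EJB-backed JAX-RS resource";
            }
        }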

    Read the article

  • Mix metrics for June 14, 2010

    - by tim.bonnemann
    We've been busy working on a few improvements to Mix which we plan to roll out over the coming weeks. In the meantime, here are our latest community metrics once again:
    Registered Mix users (weekly growth): 64,769 (+0.9%)
    Active users (percent of total): last 30 days: 4,682 (7.2%); last 60 days: 8,251 (12.7%); last 90 days: 11,936 (18.4%)
    Traffic (30-day): visits: 13,674; page views: 77,808
    Twitter: followers: 3,451; list mentions: 205
    User-generated content (30-day): new ideas: 29; new questions: 38; new comments: 167
    Groups: there are currently 1,440 Mix groups (requires login).

    Read the article

  • OBIEE 11g recommended patch sets

    - by THE
    Martin has busied himself with combining the recommended patch sets for OBIEE 11g into one single, useful KM note. (This one contains the recommendations for 11.1.1.5 as well as those for 11.1.1.6.) OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1). So if you are looking for update/patch information for your OBIEE installation, this is most likely a useful stop. And as patching is an ongoing process, you may want to bookmark this KM doc, as I am sure Martin will keep it current as new patches come out. Oh - and if you are looking for upgrade information from 11.1.1.5 to 11.1.1.6, KM Doc ID 1434253.1 might just be the thing you are looking for.

    Read the article

  • Upcoming Enhancements in AngularJS Integration in NetBeans IDE

    - by Geertjan
    New bleeding-edge enhancements in the AngularJS support in NetBeans IDE enable many more controllers to be found than in NetBeans IDE 7.4. The next version of NetBeans IDE parses all JavaScript files and checks for defined AngularJS controllers, such as the one shown below. All recognized AngularJS controllers are offered in code completion, as shown below. In other words, code completion works better at finding AngularJS controllers. Another improvement is in the "Go To Declaration" feature. When you Ctrl-click the name of a controller inside an ng-controller directive, you will be navigated to the related controller declaration. More accurate results can be shown in code completion mainly because there are changes in the generation of JavaScript virtual sources in an AngularJS page.

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company, I found that 90% of the apps were either developed on a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigorously. However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience to get behind this idea. Are there any good resources/literature on the topic?

    Read the article
