Search Results

Search found 32114 results on 1285 pages for 'general development'.

Page 93 of 1285

  • VG.net 8.5 Released

    - by Frank Hileman
We have released version 8.5 of the VG.net vector graphics system. This release supports Visual Studio 2013. Companies that purchased a VG.net license after October 1, 2013, are eligible for a free upgrade; we will be sending you an email. There is one cosmetic problem that wasted our time, as we could not find a workaround. It occurs when your display is set to a high DPI. You can see the problem in the image of the toolbox below, which uses a DPI of 125% on Windows 7: the ToolboxItem class accepts only Bitmaps with a size of 16x16. We tried many sizes and many bitmap formats. As you can see, this tiny Bitmap is then scaled by the toolbox, and the scaling algorithm adds artifacts. This is an "improvement" Microsoft recently added to Visual Studio 2013.
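For context, here is a minimal sketch of the registration involved (assuming the standard System.Drawing.Design.ToolboxItem API; the MyControl type and bitmap path are hypothetical):

using System.Drawing;
using System.Drawing.Design;

// Hypothetical control type, used only for illustration.
var item = new ToolboxItem(typeof(MyControl));

// The toolbox accepts only a 16x16 bitmap. At 125% DPI the IDE then
// upscales this tiny image itself, and the scaling adds the artifacts
// described above.
item.Bitmap = new Bitmap("MyControl16x16.bmp"); // must be exactly 16x16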

    Read the article

  • AIOUG TechDay @ Lovely Professional University, Jalandhar, India

    - by Tori Wieldt
by guest blogger Jitendra Chittoda, co-leader, Delhi and NCR JUG. On 30 August 2013, Lovely Professional University (LPU), Jalandhar organized an All India Oracle User Group (AIOUG) TechDay event on Oracle and Java. This was a full-day event with various sessions on Java EE 6, Java Concurrency, NoSQL, MongoDB, Oracle 12c, Oracle ADF, etc. The response from students was overwhelming; the auditorium was jam-packed with 600+ energetic LPU students from the B.Tech and MCA streams. Navein Juneja, Sr. Director at LPU, gave the keynote and introduced the speakers from AIOUG and the Delhi & NCR Java User Group (DaNJUG). Mr. Juneja spoke about LPU and its students, and explained how Oracle and Java are among the most used and widely accepted technologies in the world. Rohit Dhand, Additional Dean at LPU, came on stage and shared how his career started with Oracle databases. He encouraged students to learn these technologies and build their careers on them. Satyendra Kumar, vice-president of AIOUG, thanked LPU and its staff for organizing such a good technical event, and the students for their overwhelming response. He talked about the India Oracle user group and its events at various locations all over India. Jitendra Chittoda, co-leader of DaNJUG, explained how to start a new Java User Group (JUG), what its benefits are, and how to promote it. He also explained how the Indian JUGs are contributing to initiatives like Adopt-a-JSR and Adopt-OpenJDK. After the inaugural address, the event split into two tracks: one for Oracle Database and another for Java and its related technologies. Speakers: Satyendra Kumar Pasalapudi (Co-founder and Vice President of AIOUG), Aman Sharma (Oracle Database Consultant and Instructor), Shekhar Gulati (OpenShift Developer Evangelist at RedHat), Rohan Walia (Oracle ADF Consultant at Oracle), Jitendra Chittoda (Co-leader Delhi & NCR JUG and Senior Developer at ION Trading)

    Read the article

  • New T-SQL Features in SQL Server 2011

    - by Divya Agrawal
The SQL Server 2011 (code-named Denali) CTP is now available and can be downloaded at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6a04f16f-f6be-4f92-9c92-f7e5677d91f9&displaylang=en SQL Server 2011 has several major enhancements, including a new look for SSMS, which is now similar to Visual Studio and has greatly improved IntelliSense support. In this article we will focus on the T-SQL enhancements in SQL Server 2011. The main [...]

    Read the article

  • Working with Legacy code #4 : Remove the hard dependencies

    - by andrewstopford
During your refactoring cycle you should be seeking out the hard dependencies that the code may have. Examples include: the file system, a database, the network (HTTP), or an application server (Crystal). Classes that service these kinds of dependencies (or code that can be abstracted into such a class) should be wrapped in an interface for easier mocking. If your team starts referring to the interface version of these classes, the hard dependency will, over time, work itself free.
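As a minimal sketch of the idea (the IFileSystem and ReportLoader names here are illustrative, not from the original post), a file-system wrapper might look like this:

using System.IO;

// Interface that hides the hard file-system dependency.
public interface IFileSystem
{
    string ReadAllText(string path);
}

// Production implementation that delegates to the real file system.
public class PhysicalFileSystem : IFileSystem
{
    public string ReadAllText(string path)
    {
        return File.ReadAllText(path);
    }
}

// Consumers depend on the interface, so tests can substitute a mock
// and never touch the disk.
public class ReportLoader
{
    private readonly IFileSystem _fileSystem;

    public ReportLoader(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public string Load(string path)
    {
        return _fileSystem.ReadAllText(path);
    }
}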

    Read the article

  • Content Encryption Options in Oracle IRM 11g

    - by martin.abrahams
Another of the innovations in Oracle IRM 11g is a wider choice of encryption algorithms for protecting content. Three of the choices are marked as FIPS options, where FIPS refers to the Federal Information Processing Standard Publication 140-2, a U.S. government security standard for the accreditation of cryptographic modules.

    Read the article

  • Video: How To Prepare Delicious Onion Samosa

    - by Gopinath
Ulli Samosa (Onion Samosa) is one of my favourite snacks, one I used to eat while in Andhra Pradesh. But after coming to Chennai, it's very tough to find these Samosas in shops. Last weekend I searched for almost 2 hours and could not find a single shop. The best way to enjoy these delicious Samosas in Chennai is to prepare them at home, and thanks to Vah Reh Vah for posting this video on YouTube with detailed instructions on preparing Ulli Samosa. We will be trying out this recipe this evening. Update: We followed the instructions in the video and could not prepare the Samosa we were expecting. In this preparation method, the stuffing inside the Samosa was raw and I did not like the taste. Then again, it may just be me who did not like it. If you are looking for detailed preparation instructions in text, so that you can take a printout for handy reference, check the Vah Reh Vah website.

    Read the article

  • Cutting Paper through Visualization and Collaboration

    - by [email protected]
It's hard not to hear about "Going Green" these days. Many are working to be more environmentally conscious in their personal lives, and this has extended to the corporate world as well; I know I'm always looking for new ways. Environmental responsibility is important at Oracle too, and we have an entire section of our website dedicated to our solutions around the Eco-Enterprise. You can check it out here: http://www.oracle.com/green/index.html Perhaps the biggest and most obvious challenge in the world of business is the fact that we use so much paper. There are many good reasons why we print today, too. For example: printing off a document, spreadsheet, or CAD design that will be reviewed and marked up while on a plane; having a printout of a facility when a field engineer performs on-site maintenance; or a multi-party design review where a number of people review a drawing in a meeting room, scribbling onto a large-scale drawing print to provide their collaborative comments. These are just a few potential use cases, and they're valid ones. However, there's a way in which you can turn these paper processes into digital ones. AutoVue allows you to view, mark up, and collaborate on all the data you would print; indeed, this is the core of what AutoVue does. So if we take the examples above, we could address each as follows: AutoVue lets you view the document, spreadsheet, or CAD drawing on your laptop. Even if you originally had this data vaulted in some type of system of record (like an ECM solution) and view your data from there, AutoVue allows you to "Work Offline" and take the documents you need to review on your laptop. From there, the many annotation tools in AutoVue will give you what you need to comment upon the documents that you are reviewing. The challenge with the mobile workforce is always access to information. People who perform maintenance and repair operations are often in locations with little to no Internet connectivity. However, technology is coming to these people in the form of laptops, tablet PCs, and other portable devices too. AutoVue can address situations with limited bandwidth through our streaming technology for viewing, meaning that the most up-to-date information can be pulled from the central server - without the need for large data transfers. When there is no connectivity at all, the "Work Offline" option will handle this. For a design review session, the Real-Time Collaboration capabilities of AutoVue will let all the participants view the same document in a synchronized view, allowing each person to mark up the document at the same time. Since this is done in a web-based manner, not only is it unnecessary to print the document, but you benefit by reducing the travel needed for these sessions. Not only are trees spared, but jet fuel as well. There are many steps involved with "Going Green", but each step is a necessary one. What we do today will directly influence future generations, and we're looking to help.

    Read the article

  • Is it okay to be a generalist?

    - by Londoner
I work at a ~50 employee company (UK), where all the technical people do a bit of everything. Specialising in anything for very long (6 months) is discouraged. For example, last week I built a new Debian webserver, refactored some Perl, sat on a sales phone call, did a tape backup, reviewed code, built and deployed an RPM, and gave opinions about x, y, z... With such a work scheme, I have gained a general knowledge of how many things work, along with some pretty specific knowledge. I maybe program for 5 hours a week, despite officially being a developer. Does anyone else work like this (or is this company unique)? Is it a problem to have skills developed in this way? (i.e. to know a bit about everything in a certain domain, rather than everything about, say, one programming language?) Is it okay to be a generalist?

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year. The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail. Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page. There were some extremely hot contests for many of this year's awards - and all very deserving winners. The entries demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year. The judging panel included some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement. This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe. These are retail veterans who have each helped to shape the British High Street over the last decade, and it was great to chat with many of them in the Oracle VIP area last night. For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry. Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive. Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail. Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive. During this time he led the drive for international expansion and an aggressive campaign to increase market share. He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service. More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones, allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery. John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for its overall dedication to excellent retailing practices. The business also won the American Express Marketing/Advertising Campaign of the Year award for its memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts. Store Design of the Year was another exciting category, with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers. The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper.
Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements. 

    Read the article

  • Global Indian Developer Summit (GIDS), JavaOne Moscow, Java Summit Chennai

    - by arungupta
My whirlwind tour of Java EE and GlassFish starts next weekend and covers the following cities over the next 6 weeks: JavaOne and Oracle Develop, Moscow; Global Indian Developer Summit, Bangalore; Java Summit, Chennai; JavaOne, Hyderabad; OTN Developer Day, Pune; OTN Developer Day, Istanbul; Geecon, Poznan; JEEConf, Kiev; OTN Developer Day, Johannesburg. Several other members of the team will be speaking at some of these events as well. Please feel free to reach out to any of us, ask a question, and share your passion. Here is the first set of conferences coming up:

JavaOne and Oracle Develop, Moscow - Date: Apr 17-18. My Schedule: "Deploying your Java EE 6 Applications in Production" hands-on lab, the technical keynote, and some other technical sessions. Venue: Russian Academy of Sciences. Register. Connect: @OracleRU

Global Indian Developer Summit, Bangalore - Date: April 17-20 (dates decided, time slots TBD). My Schedule: NetBeans/Java EE 6 workshop on April 19, other sessions (as listed above) on April 20. Venue: J. N. Tata Auditorium, National Science Symposium Complex, Sir C. V. Raman Avenue, Bangalore, India. Register. Connect: @GreatIndianDev

Java Summit, Chennai - Date: April 21, 2011. My Schedule: Java EE 7 at 9:30am, JAX-RS 2.0 at 11am. Venue: VELS University. Register (FREE). Connect: @jug_c

Where will I meet or run with you? Do ask me to record a video session if you are using GlassFish and would like to share your story at blogs.oracle.com/stories.

    Read the article

  • New Whitepaper: Oracle E-Business Suite on Exadata

    - by Steven Chan
Our Maximum Availability Architecture (MAA) team has quietly been amassing a formidable set of whitepapers about the Oracle Exadata Database Machine. They're available here: MAA Best Practices - Exadata Database Machine. If you're one of the lucky ones with access to this hardware platform, you'll be pleased to hear that the MAA team has just published a new whitepaper with best practices for EBS environments: Oracle E-Business Suite on Exadata. This whitepaper covers the following topics: Getting to Exadata -- a high-level overview of fresh installation on, and migration to, Exadata Database Machine, with pointers to more detailed documentation; High Availability and Disaster Recovery -- an overview of our MAA best practices, with pointers to our detailed MAA Best Practices documentation; Performance and Scalability -- best practices for running Oracle E-Business Suite on Exadata Database Machine, based on our internal testing.

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills, but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as that is how they are designed, but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to keep solving problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools. With that in mind, I thought I'd share my current 'kata'. It is not really a code kata, as it is too big; I prefer to think of it as a 'solution kata'. The code is on bitbucket here. What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple: Must be able to define a map to describe the world using simple X,Y co-ordinates (Z co-ordinates as well if you feel like getting clever). Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever). A coder should be able to easily write a class which will act as an inhabitant of the world. An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to make a decision on an action, which it passes back to the 'world' for processing. At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can 'see' in the world and how fast they can move. Coders who write a really bad 'inhabitant' should not adversely affect the rest of the world. Should allow multiple inhabitants in the world. So that was the solution I set out to build as a practice exercise and a little bit of fun. It had some interesting problems to solve, and I figured, if it turned out OK, I could potentially use it as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant, show them the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex. I have been playing with the solution for a short time now and have it working in basic concepts. Below is a screenshot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, and maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write, and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.
public class TestPlayer : LivingEntity
{
    public TestPlayer()
    {
        Name = "Beta Boy";
        LifeKey = Guid.NewGuid();
    }

    public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
    {
        try
        {
            var action = new MovementAction();

            // Move forward if we can.
            if (actionContext.Position.ForwardFacingPositions.Length > 0)
            {
                if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                {
                    action.DirectionToMove = MovementDirection.Forward;
                    return action;
                }
            }
            if (actionContext.Position.LeftFacingPositions.Length > 0)
            {
                if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                {
                    action.DirectionToMove = MovementDirection.Left;
                    return action;
                }
            }
            if (actionContext.Position.RearFacingPositions.Length > 0)
            {
                if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                {
                    action.DirectionToMove = MovementDirection.Back;
                    return action;
                }
            }
            if (actionContext.Position.RightFacingPositions.Length > 0)
            {
                if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                {
                    action.DirectionToMove = MovementDirection.Right;
                    return action;
                }
            }
            return action;
        }
        catch (Exception ex)
        {
            World.WriteDebugInformation("Player: " + Name, string.Format("Player generated exception: {0}", ex.Message));
            throw; // rethrow without resetting the stack trace
        }
    }

    private bool CheckAccessibilityOfMapBlock(MapBlock block)
    {
        if (block == null ||
            block.Accessibility == MapBlockAccessibility.AllowEntry ||
            block.Accessibility == MapBlockAccessibility.AllowExit ||
            block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
        {
            return true;
        }
        return false;
    }
}

It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with parallel extensions to spread the compute-intensive stuff across all cores, had to heavily factor in visibility of methods and properties (so class design was paramount), and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my 'solution kata'. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?

    Read the article

  • So which null equals this null, that null? maybe this null, or is it this null?

    - by GrumpyOldDBA
Tuning takes many routes, and I get into some interesting situations and often make some exciting finds; see http://sqlblogcasts.com/blogs/grumpyolddba/archive/2010/05/17/just-when-you-thought-it-was-safe.aspx for an example. Today I encountered a multitude of foreign key constraints on a table. Now, FKs are often candidates for indexes, and as none of the defined keys had an index, it required a closer look. I view foreign key constraints as somewhat of a pain; excessive keys can cause excessive related...(read more)

    Read the article

  • Simple Netduino Go Tutorial Flashing RGB LEDs with a potentiometer

    - by Chris Hammond
In case you missed the announcement on 4/4, the guys at Secret Labs, along with other members of the Netduino community, have come out with a new platform called Netduino Go. Head on over to www.netduino.com for the introduction forum post. This post covers how to quickly get up and running with your Netduino Go, based on Chris Walker's getting-started forum post, with some enhancements that I think will make it easier to get up and running, as Chris' post unfortunately leaves a few things out. Hardware...(read more)

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.

Feed aggregator

Our feed aggregator works as follows: load the list of blogs, download each RSS feed, parse the feed XML, and add new posts to the database. The aggregator is run by a task scheduler every 15 minutes, for example. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.

Serial aggregation

Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();

        for (var index = 0; index < blogs.Count; index++)
        {
            ImportFeed(blogs[index]);
        }
    }

    private void ImportFeed(BlogDto blog)
    {
        if (blog == null)
            return;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}

The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

Task parallelism

Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding the parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time: feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed imports because the importing tasks don't share any resources and therefore need no synchronization.
After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation.

internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();
        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}

You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives a clear performance boost - the time is now 17.57 seconds.

Data parallelism

There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. Per the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel - the imported feed entries. As checking for a feed entry's existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();
        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        feed.Channel.Items.AsParallel().ForAll(a =>
        {
            SaveRssFeedItem(a, blog.Id, blog.CreatedById);
        });
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        feed.Entries.AsParallel().ForAll(a =>
        {
            SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
        });
    }
}

We made another small change, and as a result we parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: the time is now 11.22 seconds.

Results

Here are our measurement results (numbers are given in seconds):

Serial                      25.46
Task parallelism            17.57
Task + data parallelism     11.22

As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When we add data parallelism to task parallelism, aggregation takes about 2.3 times less time than in the original case.

More about TPL and PLINQ

Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go parallel, and even then you have to measure the effects of parallel processing to find out whether the parallel code performs better. If you are not careful, the troubles you face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10000 runs). Parallel programming is something that is hard to ignore; effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism, so your application doesn't use all the computing cores and leaves one or more of them free for the host system and other processes (see the sketch below). And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version, all .NET languages will have built-in support for parallel programming, along with new language constructs that support it.
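For illustration, here is a minimal sketch (not from the original posting; it reuses the blogs/feed/blog names from the listings above) of capping the degree of parallelism in both PLINQ and TPL:

// PLINQ: cap this query at three concurrent workers.
feed.Channel.Items
    .AsParallel()
    .WithDegreeOfParallelism(3)
    .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));

// TPL: leave one core free for the host system and other processes.
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
};
Parallel.ForEach(blogs, options, b => ImportFeed(b));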
Currently you can download Visual Studio Async to get some idea of what is coming.

Conclusion

Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. In two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.

    Read the article

  • Samsung introduces smart watch. But without any smartness!

    - by Gopinath
The era of mobile phones can be classified as before iPhone and after iPhone. When the iPhone was introduced it was revolutionary, smart, awesome and technologically far ahead of any other phone available in the market. The iPhone was the first true smartphone and a game changer. With the same goal in mind, Samsung tried to revolutionize the watch industry by announcing Galaxy Gear watches. They branded it as a smart watch and hyped its release as if it were going to revolutionize the way we use watches. But in reality there is nothing exciting about it. Instead it's expensive ($300), the battery lasts less than a day, the user interface is very, very slow, it has a small memory capacity, and it is of little use if not connected to a Samsung phone!! With so many negatives, how can it be a smart watch? Reviews on the Galaxy Gear smart watch: The Next Web - smartwatches are still too dumb. The Verge - Samsung's Galaxy Gear isn't really a smartwatch. Gizmodo - $300 is a lot for a souped-up fitness tracker, and as far as the basic smartphone functions Galaxy Gear is capable of, those feel a little strange and counterintuitive. ZDNet - Samsung has left the door wide open with the Galaxy Gear, which looks both rushed and exorbitantly priced at the same time. A few tweets on Galaxy Gear: "1. Apple surprises with iPhone 2. Tablet rumors 3. Others rush crap to market 4. iPad 5. 'iWatch' Rumors 6. Others rush crap to market 7. ?" - Matthew Panzarino (@panzer), September 4, 2013. "Camera, phone, music... There's one thing that the Gear doesn't seem to have: a purpose" - Businessweek (@BW), September 5, 2013. A Galaxy Gear video demo shows how slow it is: there is a hands-on video from SlashGear in which you can see how slowly the device performs. With the rumors of Apple building a smart watch, Samsung rushed to market with its own version of a (smart) watch as a preemptive strike. But with a half-baked product and without any innovation, it backfired on its own reputation. This would have been a great chance for Samsung to prove that the company is innovative and not a copycat. But it failed to innovate and missed the chance.

    Read the article

  • Remembering September 11 - 11 Years Later

    - by user12613380
It's September 11 again and time to reminisce about that fateful day when the world came together as one. The attacks of that day touched everyone around the world, as almost 3000 people from the United States and 38 other countries were killed. This year, I am finding it difficult to say anything other than what I have said in previous years. So, I will not try to "wax loquacious." Instead, I will simply say that I will never forget. I will not forget where I was on that day. I will not forget the people who died. I will not forget the people who gave their lives so that others might live. And I will not forget how our world changed on that day. And with that remembrance, we again return to our lives, using tragedy to drive us to build a world of peace and opportunity. My thanks go out again to the men and women, uniformed or not, who continue to protect us from harm. May we never again experience such human tragedy, on U.S. soil or elsewhere.

    Read the article

  • Future Tech Duke

    - by Tori Wieldt
    Do you like the new Duke? Have you gotten the new Duke screensaver yet? Follow @java or Like I <3 Java on Facebook and get the latest 3D, animated "Future Tech Duke" screensaver.   If you haven't already, register now to watch the global July 7 Java 7 community celebration and learn more about Java moving forward. We are looking for questions from the community to be asked during the panel Q & A. Enter your questions as a comment here, or tweet it with #java7. There's lots of great content being created for Java 7: technical articles, videos, updated web pages (can you say "layer cake?"), T-shirts, presentations, and there will be lots of Java 7 content in the new Java Magazine. See you at the Java 7 celebration event! Duke will be there!

    Read the article

  • Provisioning Videos

    - by Owen Allen
There are a couple of new videos up on the Oracle Learning YouTube channel about Ops Center's provisioning capabilities. Simon Hayler does a walkthrough of a couple of different procedures. The first video shows you how to provision Oracle Solaris Zones. It explains how to create an Oracle Solaris Zone profile, and then how to apply it (using a deployment plan) to a target system. The second video shows you how to provision an x86 server with Oracle Solaris. This uses a very similar process - you create an OS provisioning profile, then use a deployment plan to apply it to the target hardware. The documentation covers OS provisioning and zone creation in the Feature Guide, if you're looking for additional information.

    Read the article

  • Troubleshooting Windows Blue Screen Errors

The so-called ‘Blue Screen of Death’ has inspired fear in the hearts of mere mortals, but Systems Administrators are expected to be capable of casually beating back this sinister beast. So imagine Ben Lye’s distress when he discovered that many aspiring SysAdmins had no structured approach to tackling the root of the problem. Setting out to remedy the situation, Ben lays out a simple 3-step plan, and dispenses some good advice.

    Read the article

  • Do Repeat Yourself in Unit Tests

    - by João Angelo
Don’t get me wrong, I’m a big supporter of the DRY (Don’t Repeat Yourself) principle - except, however, when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and it should not depend on externals shared with other unit tests. In a typical unit test we can divide the code into two major groups: preparation of preconditions for the code under test, and invocation of the code under test. It’s in the first group that you are tempted to refactor common code across several unit tests into helper methods that can then be called in each one of them. Another way to avoid duplicating code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even to complicate the process when a given test fails. We love unit tests because of their rapid feedback when something goes wrong. However, this feedback usually requires reading the code of the failed test. Given this, what do you prefer: to read a single method, or to wander through several methods like SetUp/TearDown and private common methods? I say it again, do repeat yourself in unit tests. It may feel wrong at first, but I bet you won’t regret it later.
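As a minimal sketch of the contrast (NUnit is assumed here; the Order type and its members are illustrative, not from the original post):

using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    // Self-contained: every precondition is visible inside the test itself,
    // so a failure can be diagnosed by reading this one method - no trips
    // to SetUp/TearDown or private helpers.
    [Test]
    public void Total_AppliesDiscount_WhenCustomerIsPremium()
    {
        var order = new Order(customerIsPremium: true);
        order.AddItem(price: 100m, quantity: 2);

        decimal total = order.Total();

        Assert.AreEqual(180m, total); // assumes a 10% premium discount
    }
}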

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
Recently I came across an article on DB2 about using UNION instead of OR, so I thought of carrying out some research on SQL Server to see in which scenarios UNION is optimal and in which scenarios OR would be best. I will analyze this with a few scenarios, using samples taken from the AdventureWorks database's Sales.SalesOrderDetail table.

Scenario 1: Selecting all columns

We are going to select all columns, and there is a non-clustered index on the ProductID column.

--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714
   OR ProductID = 709
   OR ProductID = 998
   OR ProductID = 875
   OR ProductID = 976
   OR ProductID = 874

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

Query 1 uses OR and the latter uses UNION. Let us analyze the execution plans for these queries. As expected, Query 1 uses a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might see CX_PACKET waits as well. Let's look at the profiler results for these two queries:

        CPU   Reads   Duration   Row Count
OR       78   1252    389        3854
UNION   250   7495    660        3854

You can see from the above table that the UNION query does not perform as well as the OR query, though both return the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

Scenario 2: Non-clustered and clustered index columns only

--Query 1 : OR
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
WHERE ProductID = 714
   OR ProductID = 709
   OR ProductID = 998
   OR ProductID = 875
   OR ProductID = 976
   OR ProductID = 874
GO

--Query 2 : UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
GO

This time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we analyze the execution plans: again, Query 2 is more complex than Query 1. The profiler analysis:

        CPU   Reads   Duration   Row Count
OR        0   24      208        3854
UNION     0   38      193        3854

In this analysis, there is only a slight difference between OR and UNION.

Scenario 3: Selecting all columns for different fields

Up to now we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?
--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714
   OR CarrierTrackingNumber LIKE 'D0B8%'

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

As we can see, the query plan for the second query has improved. Let us see the profiler results:

        CPU   Reads   Duration   Row Count
OR       47   1278    443        1228
UNION    31   1334    400        1228

So in this case too, there is little difference between OR and UNION.

Scenario 4: Selecting clustered index columns for different fields

Now let us go only with clustered indexes:

--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714
   OR CarrierTrackingNumber LIKE 'D0B8%'

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. The profiler results for these queries:

        CPU   Reads   Duration   Row Count
OR        0   319     366        1228
UNION     0   50      193        1228

Now see the difference: in this scenario UNION has somewhat of an advantage over OR.

Conclusion

Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad descriptive purposes only.

    Read the article

  • How to use the Netduino Go Piezo Buzzer Module

    - by Chris Hammond
Originally posted on ChrisHammond.com. Over the next couple of days people should be receiving their Netduino Go Piezo Buzzer Modules, at least if they ordered them from Amazon. I was lucky enough to get mine very quickly from Amazon and put together a sample project the other night. This is by no means a complex project, and most of it is code from the public domain for projects based on the original Netduino. Project Overview: So what does the project do? Essentially it plays 3 "tunes" that...(read more)

    Read the article

  • Secret Agent Man

    - by Bil Simser
Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. For example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly. Internet Explorer 9 modified the UA string slightly, so just in case you're looking for it, here are the user agent strings for IE9 (in various modes):

Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)

Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

A couple of things to note here: This was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from). Various applications and platforms add to the UA string, just like they do in previous IE releases. For example, you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and presenting options accordingly to the end user. As applications will continue to add to and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect whether you're coming from IE running on a Windows Phone 7, just look for "iemobile" in the user agent string, as in the sketch below. Happy hacking!
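A minimal sketch of that substring check (assuming classic ASP.NET's System.Web; the flag names are illustrative):

using System;
using System.Web;

// Read the raw User-Agent header for the current request.
string ua = HttpContext.Current.Request.UserAgent ?? string.Empty;

// Query for parts of the string, never the whole thing: applications
// keep appending tokens, so exact matches will break over time.
bool isIe9 = ua.IndexOf("Trident/5.0", StringComparison.OrdinalIgnoreCase) >= 0;
bool isWindowsPhone7 = ua.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0;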

    Read the article
