Search Results



  • Installing Java ME SDK Plugin for NetBeans is now much easier!

    - by SungmoonCho
    The other day, I wrote about how to download and install the Java ME SDK plugin for NetBeans. If you are using NetBeans 7.2.1 or later, you don't have to go through that whole process at all. It's now a matter of a few clicks, because all the plugins are on the NetBeans update server. Here is the new way to install and integrate the Java ME SDK plugins for NetBeans:
    1. In NetBeans, go to "Tools" > "Plugins".
    2. Click on the "Available Plugins" tab and locate "Java ME SDK Tools".
    3. Check the tools you want to install, and click the "Install" button at the bottom left corner.
    4. NetBeans will restart, and that's it!
    Remember that different Java ME SDK versions require different plugin versions. If you are using Java ME SDK 3.0.5 or earlier, you must install Java ME SDK Tools version 2.0 (works with NetBeans 7.1.2 or earlier). If you are using Java ME SDK 3.2 or later, you must install Java ME SDK Tools version 3.0 (works with NetBeans 7.2 or later).


  • Book / software to learn ERP?

    - by user22311
    For a while I've been wanting to learn ERP. What I would like to do is set up a system and then practice running a business: generating invoices, raising purchase orders, producing monthly accounts, keeping track of fixed assets, etc. Then I'd look at enhancing the system further by adding custom code. I know there are a variety of open source ERP systems around, but I would prefer a widely used commercial system, as familiarity with such a system would be more marketable. So are there any such systems that have a free developer version available? I've looked around, but I've had trouble finding one.
    I would also like to find a good book, or more likely books, on ERP. Ideally I would get an in-depth explanation of ERP from a business perspective, i.e. the accounting operations it supports, how to do all the regular accounting tasks, configuration, business operations best practices, accounting controls and so on. In addition I would like an IT perspective, i.e. setting up the system and developing forms and reports. Unfortunately the few books I've looked at have been really superficial and totally inadequate. They either fall into the beginning-programmer camp, where they use the programming tools but concentrate on programming 101 topics like loops and flow control, or they cover setting up the system with lots of screenshots but little substance as to why things should be done a certain way.
    I have a programming background but no real experience in implementing ERP systems. I also have reasonable accounting knowledge, and have used ERP systems in various jobs, but only to a very limited degree. So are there any ERP experts who can point me in the right direction? Thanks.


  • Not able to install LTT tool in 10.04

    - by Ashoka
    I have tried the commands below to install LTTng on VMware running Ubuntu 10.04, but got the following error. Please help.

    From link: https://launchpad.net/~lttng/+archive/ppa

        $ sudo apt-add-repository ppa:lttng/ppa
        $ sudo apt-get update
        $ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace

        user@usr:~$ sudo apt-add-repository ppa:lttng/ppa
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        gpg: requesting key 33739778 from hkp server keyserver.ubuntu.com
        gpg: unable to execute program `/usr/local/libexec/gnupg/gpgkeys_curl': No such file or directory
        gpg: no handler for keyserver scheme `hkp'
        gpg: keyserver receive failed: keyserver error

        user@usr:~$ sudo apt-get update
        .....

        user@usr:~$ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package lttng-tools

    My system details:

        user@usr:~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"

        user@usr:~$ uname -a
        Linux usr 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux

    Regards, Ashoka


  • Problem with understanding how to start

    - by Coolface
    Okay, this might be a little off-topic, but I'll try anyway. Sorry to bother you. I've been working as a sysadmin for at least 5 years now and I quite enjoy the IT field in general. Somehow I was never much interested in programming, but I always wanted to learn something at least easy and for personal use. As a sysadmin I need scripting skills, so I learned shell scripting without much trouble; I also tried to learn Pascal, Delphi and BASIC over time, and most recently Python.

    Well, my problem is that when I try to learn programming I just can't apply what I learn from the books to the real world. What I mean is, I understand there are data structures, algorithms, variables, libraries, if-then logic, etc., but I just can't see how to apply these things when I want to do real work. Say I want to do something as simple as parsing a web page. I draw a quick algorithm: get a web page, find a word on it and write it to a file. On paper everything looks simple, but when I get to the coding I get stuck pretty much from the start. I've tried reading the code of real programs, but that's just totally confusing, especially the big parts with many classes, so I quickly lose the trail of what the code is doing. I think I just lack some fundamentals needed to see the big picture, but I don't really know what those might be. Or maybe I just don't have a passion for programming at all...

    My best bet was shell scripting - I have really no problem writing complex scripts, but that's just not enough. Recently I read around 5 or 6 Python books, because everyone says it's so easy even a kid can code something, but still no luck: Python is good and easy, but I can't write anything harder than procedural-style code like in bash. That works for easy things, but when I want harder things I'm still stuck. In college I was also not a math and tech guy and liked to study mostly non-tech stuff like economics and psychology - maybe that's my problem? Anyway, any advice would be greatly appreciated.


  • How do I tell my parents that landing a job is what actually counts?

    - by shovonr
    On one side, I just want to get a degree with a 3.0 GPA. On the other side, my parents want more than just a 3.0. Now here's the thing. I program with a passion. I spend day and night programming. And I ace all my programming courses. However, I do terribly in all my elective courses - writing, history, and all that stuff - which leaves me with only a 3.1 to 3.2 GPA. And my parents want more. They think that university is like high school, where you need super-stellar grades to get to the next level. But they don't realize that good enough grades will land me a job. And they don't realize that a programmer needs to practice to become good at programming, and that having good skills is what will land me a job at a nice software development company. Thankfully, they don't threaten to beat me with a baseball bat or anything like that. They just occasionally give me the little "tsk-tsk". But even that little "tsk-tsk" makes me feel guilty for opening up an IDE. And on top of that, I procrastinate because of that feeling of guilt. So now, I want to come clean with them, and I want to know a good way to do that. [Edit] OK, I now realize I should aim for higher grades, as some have suggested below.


  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts with when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.

    Feed aggregator

    Our feed aggregator works as follows:
    1. Load the list of blogs
    2. Download each RSS feed
    3. Parse the feed XML
    4. Add new posts to the database

    Our feed aggregator is run by a task scheduler, for example every 15 minutes. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. And our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs (a minimal sketch of such a harness follows below). After every run we empty the posts table in the database.
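    The original post does not show the timing harness itself, so here is a minimal sketch of what it might look like (my illustration; it assumes the FeedClient class developed in the next sections):

        using System;
        using System.Diagnostics;

        internal class Program
        {
            static void Main()
            {
                var stopwatch = Stopwatch.StartNew();

                // Run one full aggregation pass over all registered blogs.
                new FeedClient().Execute();

                stopwatch.Stop();
                Console.WriteLine("Aggregation took {0:0.00} seconds",
                    stopwatch.Elapsed.TotalSeconds);
            }
        }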
    Serial aggregation

    Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                for (var index = 0; index < blogs.Count; index++)
                {
                    ImportFeed(blogs[index]);
                }
            }

            private void ImportFeed(BlogDto blog)
            {
                if (blog == null)
                    return;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

    Task parallelism

    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism on the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding the parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time. Feed downloading and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don't share any resources and therefore don't need any synchronization. After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation. Only Execute() and ImportFeed() change; the constructor, ImportRssFeed() and ImportAtomFeed() stay exactly as in the serial version:

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

    You should notice the first signs of the power of TPL. We made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives some performance boost - the time is now 17.57 seconds.
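    As an aside - this variant is not from the original post - the same fork-join pattern could be expressed more compactly with Parallel.ForEach from System.Threading.Tasks, which partitions the collection and blocks until all iterations have completed, so the explicit Task array and WaitAll call disappear:

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            // Schedules the imports on the thread pool and waits for all of
            // them - equivalent to the explicit Task array above.
            Parallel.ForEach(blogs, blog => ImportFeed(blog));
        }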
    Data parallelism

    There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to scenarios in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel: the imported feed entries. As checking a feed entry for existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization. Only the two import methods change; the constructor, Execute() and ImportFeed() stay as in the task-parallel version:

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            feed.Channel.Items.AsParallel().ForAll(a =>
            {
                SaveRssFeedItem(a, blog.Id, blog.CreatedById);
            });
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            feed.Entries.AsParallel().ForAll(a =>
            {
                SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
            });
        }

    We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: the time is now 11.22 seconds.

    Results

    Let's summarize our measurement results (numbers are given in seconds): serial - 25.46; task parallelism - 17.57; task plus data parallelism - 11.22. As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism, our aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go parallel, and even then you have to measure the effects of parallel processing to find out whether the parallel code performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 runs). Still, parallel programming is hard to ignore: effective programs are able to use the multiple cores of modern processors. Using TPL you can also set the degree of parallelism, so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes (see the sketch below). And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support parallel programming.
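    Neither knob appears in the post's code, so here is a minimal, self-contained sketch of how the degree of parallelism could be capped in both flavors (the limits are illustrative):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class DegreeOfParallelismDemo
        {
            static void Main()
            {
                var numbers = Enumerable.Range(1, 20).ToArray();

                // TPL: cap Parallel.ForEach at three concurrent workers.
                var options = new ParallelOptions { MaxDegreeOfParallelism = 3 };
                Parallel.ForEach(numbers, options,
                    n => Console.WriteLine("TPL processed {0}", n));

                // PLINQ: leave one core free for the host system.
                numbers.AsParallel()
                       .WithDegreeOfParallelism(
                           Math.Max(1, Environment.ProcessorCount - 1))
                       .ForAll(n => Console.WriteLine("PLINQ processed {0}", n));
            }
        }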
    To get some idea about what is coming, you can currently download Visual Studio Async.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. In two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.


  • Google App Engine 1.3.1 <admin-console> Issue

    - by Taylor L
    I attempted to add an <admin-console> section to my appengine-web.xml and I got the exception below. The <admin-console> element is a valid element according to the appengine-web.xsd. It's also documented in the App Engine docs. Any ideas as to what is wrong?

        <admin-console>
            <page name="My Admin" url="/app/admin" />
        </admin-console>

        Feb 14, 2010 12:40:09 AM com.google.apphosting.utils.config.AppEngineWebXmlReader readAppEngineWebXml
        SEVERE: Received exception processing C:/development/taylor/myapp/target/myapp-web-0.0.1-SNAPSHOT\WEB-INF/appengine-web.xml
        com.google.apphosting.utils.config.AppEngineConfigException: Unrecognized element <admin-console>
            at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processSecondLevelNode(AppEngineWebXmlProcessor.java:99)
            at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processXml(AppEngineWebXmlProcessor.java:46)
            at com.google.apphosting.utils.config.AppEngineWebXmlReader.processXml(AppEngineWebXmlReader.java:94)
            at com.google.apphosting.utils.config.AppEngineWebXmlReader.readAppEngineWebXml(AppEngineWebXmlReader.java:61)
            at com.google.appengine.tools.admin.Application.<init>(Application.java:88)
            at com.google.appengine.tools.admin.Application.readApplication(Application.java:120)
            at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:107)
            at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:58)
            at com.google.appengine.tools.admin.AppCfg.main(AppCfg.java:54)
            at net.kindleit.gae.EngineGoalBase.runAppCfg(EngineGoalBase.java:140)
            at net.kindleit.gae.DeployGoal.execute(DeployGoal.java:38)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:579)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:498)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmentForProject(DefaultLifecycleExecutor.java:265)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:191)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:149)
            at org.apache.maven.DefaultMaven.execute_aroundBody0(DefaultMaven.java:223)
            at org.apache.maven.DefaultMaven.execute_aroundBody1$advice(DefaultMaven.java:304)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:1)
            at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody2(MavenEmbedder.java:904)
            at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody3$advice(MavenEmbedder.java:304)
            at org.apache.maven.embedder.MavenEmbedder.execute(MavenEmbedder.java:1)
            at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:176)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:63)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
            at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:408)
            at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:351)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:31)


  • how much time does grid.py take to run ?

    - by trinity
    Hello all, I am using libsvm for binary classification and wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and the script has been running for more than 12 hours. This is the state of my 5 terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt > grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt > grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt > grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt > grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt > grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files contain nothing so far. And the following files were created:

        sarts_nonarts_feat.txt.out
        sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out
        sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out
        sref_nonref_feat.txt.png
        sbiz_nonbiz_feat.txt.out
        sbiz_nonbiz_feat.txt.png
        snews_nonnews_feat.txt.out (empty)

    There's just one line of information in the .out files. The .png files are some gnuplot plots, but I don't understand what the above plots/warnings convey. Should I re-run the scripts? Can anyone please tell me how much time this script might take if each input file contains about 144000 lines? Thanks and regards


  • What would you suggest as a high school first language?

    - by ldigas
    Edit by OA: After reading some answers I'll update the question a little. At first I put it a little bluntly, but some of the answers gave me good arguments which have to be taken into consideration while taking a stand on this one (these are mostly picked up from comments and answers below). A few things to take into account:
    - To many pupils this is a first programming language - at this stage most of them have trouble grasping the difference between data types, variable passing and whatnot, let alone pointers and similar 'low level stuff' :)
    - They will all have to pass this to get into the next grade (well, the big majority of them anyway).
    - Not all of them have computers at home, not all of them are willing to learn this, let alone interested in it - so the concepts have to be taught on a finite time scale in school hours (as well as practiced on computers).
    - Free literature is a bonus - the teacher will make some scripts and handouts, but still I wouldn't like to burden the parents with buying expensive literature (also, English is not a native language here, and although they are all learning it, their ability to read it fluently is somewhat questionable).
    - Somebody gave the argument "a language which does not get in the way of ideas" - a good one.
    - Accessibility on different platforms is not especially important at this point - although most of the suggested languages are available on Windows as well as Linux, there are not many Macs in this part of Europe (their prices are sky high for anything but specialised usage).
    - I will check what the licensing issues are with the MS Express editions regarding using them at scale in high schools for purposes like this - if someone has any info about this, please do not be shy with it :)

    A friend of mine, an informatics teacher (in the EU this is something like a junior CS teacher) at a local high school, asked me what I thought the first language pupils are taught should be. It is a technical school (a little more oriented towards mathematics than the gymnasium, but not totally computer oriented). So I'm asking you - what do you think should be the first language pupils are exposed to in high school? They have been teaching Pascal so far, but she's not sure that's a good course. She thought about switching to C (which I advised against, considering that not all pupils are interested in programming to begin with, and that they should be taught something higher level while they are still just grasping the idea of a loop and such). I suggested Python or Ruby (preferably Python, since it handles all paradigms). What is your opinion on this one? I looked, but didn't find a similar question on SO, so if there is one, please just point me towards it.

    Edit: The assumption is that none of the pupils have been exposed to any programming in junior school.

    See also:
    - What is the best way to teach young kids some basic programming concepts?
    - Best ways to teach a beginner to program
    - How and when do you teach a kid to code
    - What is the easiest language to start with?
    - High School Programming


  • SQL Server 2008 R2 Enterprise won't install on Windows 2008 R2 Enterprise

    - by Carlos Paulino
    I've been trying to install SQL Server on a new Windows Server 2008 machine. I have tried everything, but I haven't been able to narrow down the problem. When the installation fails I get "Exit code (Decimal): -2068643839". The problem with this is that, according to Microsoft, this is a generic error code. I followed their guide and looked into the detail.txt inside C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\, but I can't find anything that specifies the exact error. Any suggestions? Thanks in advance. I uploaded detail.txt to http://www.megaupload.com/?d=0MV46SZH because it is too big to paste here. Below is the summary.txt:

        Overall summary:
          Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Exit code (Decimal): -2068643839
          Exit facility code: 1203
          Exit error code: 1
          Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Start time: 2011-02-28 11:29:56
          End time: 2011-02-28 11:34:45
          Requested action: Install

        Machine Properties:
          Machine name: SA-SERVER
          Machine processor count: 8
          OS version: Windows Server 2008 R2
          OS service pack: Service Pack 1
          OS region: United States
          OS language: English (United States)
          OS architecture: x64
          Process architecture: 64 Bit
          OS clustered: No

        Product features discovered:
          Product  Instance  Instance ID  Feature  Language  Edition  Version  Clustered

        Package properties:
          Description: SQL Server Database Services 2008 R2
          ProductName: SQL Server 2008 R2
          Type: RTM
          Version: 10
          SPLevel: 0
          Installation location: F:\x64\setup\
          Installation edition: ENTERPRISE

        User Input Settings:
          ACTION: Install
          ADDCURRENTUSERASSQLADMIN: True
          AGTSVCACCOUNT: NT AUTHORITY\SYSTEM
          AGTSVCPASSWORD: *****
          AGTSVCSTARTUPTYPE: Manual
          ASBACKUPDIR: Backup
          ASCOLLATION: Latin1_General_CI_AS
          ASCONFIGDIR: Config
          ASDATADIR: Data
          ASDOMAINGROUP: <empty>
          ASLOGDIR: Log
          ASPROVIDERMSOLAP: 1
          ASSVCACCOUNT: <empty>
          ASSVCPASSWORD: *****
          ASSVCSTARTUPTYPE: Automatic
          ASSYSADMINACCOUNTS: <empty>
          ASTEMPDIR: Temp
          BROWSERSVCSTARTUPTYPE: Disabled
          CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini
          CUSOURCE:
          ENABLERANU: False
          ENU: True
          ERRORREPORTING: False
          FARMACCOUNT: <empty>
          FARMADMINPORT: 0
          FARMPASSWORD: *****
          FEATURES: SQLENGINE,BIDS,CONN,IS,BC,SDK,SSMS,ADV_SSMS,SNAC_SDK,OCS
          FILESTREAMLEVEL: 0
          FILESTREAMSHARENAME: <empty>
          FTSVCACCOUNT: <empty>
          FTSVCPASSWORD: *****
          HELP: False
          IACCEPTSQLSERVERLICENSETERMS: False
          INDICATEPROGRESS: False
          INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
          INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\
          INSTALLSQLDATADIR: <empty>
          INSTANCEDIR: D:\SQLServer
          INSTANCEID: MSSQLSERVER
          INSTANCENAME: MSSQLSERVER
          ISSVCACCOUNT: NT AUTHORITY\SYSTEM
          ISSVCPASSWORD: *****
          ISSVCSTARTUPTYPE: Automatic
          NPENABLED: 0
          PASSPHRASE: *****
          PCUSOURCE:
          PID: *****
          QUIET: False
          QUIETSIMPLE: False
          ROLE: AllFeatures_WithDefaults
          RSINSTALLMODE: FilesOnlyMode
          RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
          RSSVCPASSWORD: *****
          RSSVCSTARTUPTYPE: Automatic
          SAPWD: *****
          SECURITYMODE: SQL
          SQLBACKUPDIR: <empty>
          SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
          SQLSVCACCOUNT: NT AUTHORITY\SYSTEM
          SQLSVCPASSWORD: *****
          SQLSVCSTARTUPTYPE: Automatic
          SQLSYSADMINACCOUNTS: SA-SERVER\Administrator
          SQLTEMPDBDIR: <empty>
          SQLTEMPDBLOGDIR: <empty>
          SQLUSERDBDIR: <empty>
          SQLUSERDBLOGDIR: <empty>
          SQMREPORTING: False
          TCPENABLED: 1
          UIMODE: Normal
          X86: False

        Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini

        Detailed results:
          Feature: Database Engine Services - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: SQL Client Connectivity SDK - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Integration Services - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Client Tools Connectivity - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Management Tools - Complete - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Management Tools - Basic - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Client Tools SDK - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Client Tools Backwards Compatibility - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Business Intelligence Development Studio - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed
          Feature: Microsoft Sync Framework - Status: Failed: see logs for details; MSI status: Passed; Configuration status: Passed

        Rules with failures:
          Global rules:
          Scenario specific rules:
        Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\SystemConfigurationCheck_Report.htm


  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Year! 2010 has been a busy blogging year for me (this is the 100th blog post I've done in 2010). Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year. Below is a quick listing of some of my favorite posts, organized by topic area:

    VS 2010 and .NET 4
    Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April:
    - Visual Studio 2010 and .NET 4 Released
    - Clean Web.Config Files
    - Starter Project Templates
    - Multi-targeting
    - Multiple Monitor Support
    - New Code Focused Web Profile Option
    - HTML / ASP.NET / JavaScript Code Snippets
    - Auto-Start ASP.NET Applications
    - URL Routing with ASP.NET 4 Web Forms
    - Searching and Navigating Code in VS 2010
    - VS 2010 Code Intellisense Improvements
    - WPF 4 Add Reference Dialog Improvements
    - SEO Improvements with ASP.NET 4
    - Output Cache Extensibility with ASP.NET 4
    - Built-in Charting Controls for ASP.NET and Windows Forms
    - Cleaner HTML Markup with ASP.NET 4 - Client IDs
    - Optional Parameters and Named Arguments in C# 4 - and cool scenarios with ASP.NET MVC 2
    - Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010
    - New <%: %> Syntax for HTML Encoding Output using ASP.NET 4
    - JavaScript Intellisense Improvements with VS 2010
    - VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export)
    - Box Selection and Multi-line Editing Support with VS 2010
    - VS 2010 Extension Manager (and the cool new PowerCommands Extension)
    - Pinning Projects and Solutions
    - VS 2010 Web Deployment
    - Debugging Tips/Tricks with Visual Studio
    - Search and Navigation Tips/Tricks with Visual Studio

    Visual Studio
    Below are some additional Visual Studio posts I've done (not in the first series above) that I thought were nice:
    - Download and Share Visual Studio Color Schemes
    - Visual Studio 2010 Keyboard Shortcuts
    - VS 2010 Productivity Power Tools
    - Fun Visual Studio 2010 Wallpapers

    Silverlight
    We shipped Silverlight 4 in April, and announced Silverlight 5 at the beginning of December:
    - Silverlight 4 Released
    - Silverlight 4 Tools for VS 2010 and WCF RIA Services Released
    - Silverlight 4 Training Kit
    - Silverlight PivotViewer Now Available
    - Silverlight Questions
    - Announcing Silverlight 5

    Silverlight for Windows Phone 7
    We shipped Windows Phone 7 this fall, and shipped free Visual Studio development tools with great Silverlight and XNA support in September:
    - Windows Phone 7 Developer Tools Released
    - Building a Windows Phone 7 Twitter Application using Silverlight

    ASP.NET MVC
    We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer. ASP.NET MVC 3 will RTM in less than 2 weeks from today:
    - ASP.NET MVC 2: Strongly Typed Html Helpers
    - ASP.NET MVC 2: Model Validation
    - Introducing ASP.NET MVC 3 (Preview 1)
    - Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack)
    - Announcing ASP.NET MVC 3 Release Candidate 1
    - Announcing ASP.NET MVC 3 Release Candidate 2
    - Introducing Razor - A New View Engine for ASP.NET
    - ASP.NET MVC 3: Layouts with Razor
    - ASP.NET MVC 3: New @model keyword in Razor
    - ASP.NET MVC 3: Server-Side Comments with Razor
    - ASP.NET MVC 3: Razor's @: and <text> syntax
    - ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor
    - ASP.NET MVC 3: Layouts and Sections with Razor

    IIS and Web Server Stack
    The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year:
    - Fix Common SEO Problems using the URL Rewrite Extension
    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy
    - Introducing IIS Express
    - SQL CE 4 (New Embedded Database Support with ASP.NET)
    - Introducing Web Matrix

    EF Code First
    EF Code First is a really nice new data option that enables a very clean code-oriented data workflow:
    - Announcing Entity Framework Code-First CTP5 Release
    - Class-Level Model Validation with EF Code First and ASP.NET MVC 3
    - Code-First Development with Entity Framework 4
    - EF 4 Code First: Custom Database Schema Mapping
    - Using EF Code First with an Existing Database

    jQuery and AJAX Contributions
    My team began making some significant source code contributions to the jQuery project this year:
    - jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins
    - jQuery Templates and Data Linking (and Microsoft contributing to jQuery)
    - jQuery Globalization Plugin from Microsoft

    Patches and Hot Fixes
    Some useful fixes you can download prior to VS 2010 SP1:
    - Patch for Cut/Copy "Insufficient Memory" issue with VS 2010
    - Patch for VS 2010 Find and Replace Dialog Growing
    - Patch for VS 2010 Scrolling Context Menu

    Videos of My Talks
    Some recordings of technical talks I've done this year:
    - ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe
    - VS 2010 and ASP.NET 4 Web Forms Talk in Arizona

    Other
    - About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular)
    - ASP.NET Security Fix Now on Windows Update
    - Upcoming Web Camps

    I'd like to say a big thank you to everyone who follows my blog - I really appreciate you reading it (the comments you post help encourage me to write it). See you in the New Year! Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Ubuntu Control Center Makes Using Ubuntu Easier

    - by Vivek
    Users who are new to Ubuntu might find it somewhat difficult to configure. Today we take a look at using Ubuntu Control Center, which makes managing different aspects of the system easier.

    About Ubuntu Control Center
    A lot of utilities and software have been written to work with Ubuntu. Ubuntu Control Center is one such cool utility which makes configuring Ubuntu easy. The following is a brief description: Ubuntu Control Center, or UCC, is an application inspired by Mandriva Control Center which aims to centralize and organize, in a simple and intuitive form, the main configuration tools for the Ubuntu distribution. UCC uses all the native applications already bundled with Ubuntu, but it also utilizes some third-party apps like "Hardinfo", "Boot-up Manager", "GuFW" and "Font-Manager".

    Ubuntu Control Center
    Here we look at the installation and use of Ubuntu Control Center in Ubuntu 10.04. First we have to satisfy some dependencies: you will need to install Font-Manager and jstest-gtk (links below) before installing Ubuntu Control Center (UCC). Click the Install Package button; you'll be prompted to enter your admin password for each installation package. Once installation is successful, close out of the screen. Download and install Font-Manager - again you'll need to enter your password to complete the installation. Once you have installed the two dependencies, you are all set to install Ubuntu Control Center (link below): double click the downloaded Ubuntu Control Center deb file to install it. Once installed you can find it under Applications \ System Tools \ UCC, and once you launch it you can start managing your system, software, hardware, and more.

    You can easily control various aspects of your Ubuntu system using Ubuntu Control Center; here, for example, we configure the firewall under Network and Internet. UCC allows easy access to several aspects of your system's configuration. Once you install UCC you'll see how easy it is to configure your Ubuntu system through an intuitive, clean graphical interface. If you're new to Ubuntu, UCC can help you set up your system the way you like in a user-friendly way.

    Home page of UCC: http://code.google.com/p/ucc/
    Links: Download Font-Manager | Download jstest-gtk | Download Ubuntu Control Center (UCC)


  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

    The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:
    1. Analyzing the requirements to find the Entry Points and their signatures
    2. Designing the functionality to be executed when those Entry Points get triggered
    3. Implementing the functionality according to the design, aka coding

    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: to design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

    And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). I also consider calling external APIs to be logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is the doing of whatever needs to be done by software. Transformations are done either through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

    But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions: no performance gain, no higher scalability etc. comes through functions. Functions are not relevant to functionality. Strange, isn't it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

    To sum this up: functional design is on a level of abstraction above (!) logical or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly; then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code; it's just a theory with no experiments to prove it. But how to do functional design?

    An example of functional design

    Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line...

        C:\>deduplicate "a, b, a, c, d, b, e, c, a"
        a, b, c, d, e

    ...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in the case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:
    1. Transform the input data from the user into a request.
    2. Call the request handler.
    3. Transform the output of the request handler into a tangible result for the user.

    Or to phrase it a bit more generally: accept input, transform input into output, present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless each is a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

    Interestingly, the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done it yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:
    1. Accept string list from command line
    2. De-duplicate
    3. Present de-duplicated strings on standard output

    And this concrete list of processing steps can easily be transformed into code:

        static void Main(string[] args)
        {
            var input = Accept_string_list(args);
            var output = Deduplicate(input);
            Present_deduplicated_string_list(output);
        }

    Instead of one big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

        private static string Accept_string_list(string[] args)
        {
            return args[0];
        }

        private static void Present_deduplicated_string_list(string[] output)
        {
            var line = string.Join(", ", output);
            Console.WriteLine(line);
        }

    Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

    De-duplicate:
    1. Parse the input string into a true list of strings.
    2. Register each string in a dictionary/map/set. That way duplicates get cast away.
    3. Transform the data structure into a list of unique strings.

    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
    But now, after this refinement, the implementation of each step is easy again:

        private static string[] Parse_string_list(string input)
        {
            return input.Split(',')
                        .Select(s => s.Trim())
                        .ToArray();
        }

        private static Dictionary<string, object> Compile_unique_strings(string[] strings)
        {
            return strings.Aggregate(
                new Dictionary<string, object>(),
                (agg, s) => { agg[s] = null; return agg; });
        }

        private static string[] Serialize_unique_strings(Dictionary<string, object> dict)
        {
            return dict.Keys.ToArray();
        }

    With these three additional functions Main() now looks like this:

        static void Main(string[] args)
        {
            var input = Accept_string_list(args);
            var strings = Parse_string_list(input);
            var dict = Compile_unique_strings(strings);
            var output = Serialize_unique_strings(dict);
            Present_deduplicated_string_list(output);
        }

    I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:
    1. Accept string list from command line
    2. Parse the input string into a true list of strings.
    3. Register each string in a dictionary/map/set. That way duplicates get cast away.
    4. Transform the data structure into a list of unique strings.
    5. Present de-duplicated strings on standard output

    You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

    So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

    Introducing Flow Design

    Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. It might also access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

    In its simplest form a process can be written as a bullet point list of steps, e.g.:
    - Get data from user
    - Output result to user
    - Transform data
    - Parse data
    - Map result for output

    Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next, etc.?
    1. Get data from user
    2. Parse data
    3. Transform data
    4. Map result for output
    5. Output result to user

    That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. And how do you know that beforehand without investing some thinking? And how do you do this thinking in a systematic fashion? My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
    For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in keeping with the KISS (Keep It Simple, Stupid) principle than returning constants or the other trivial stuff TDD development often is started with.

    So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions, so moving from design to coding is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text has just one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, a functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

    Visualizing processes

    The building block of the functional design notation is a functional unit. I mostly draw it like this: [diagram omitted]

    Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow, as in algorithms, is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview.

    But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed, you can worry about all the detail you want. Functional design does not eliminate the nitty gritty; it just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later, coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

    Using functional units as building blocks of functional design, processes can be depicted very easily. Here's the initial process for the example problem: [diagram omitted]

    For each processing step, draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
    Finally, think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow, and enclose the data description in brackets. That way you can clearly see whether all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content; use lower case labels for that purpose. A name starting with an upper case letter, like "String" or "Customer", on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

    Flows wired up in this manner I call one-dimensional (1D). Each functional unit has just one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output; instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

    The Principle of Mutual Oblivion

    A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process, and they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel.

    Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream and producing output for some downstream. That makes functional units very easy to test - at least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context or purpose. They are just parts of a whole, focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

    By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

- Translate an input port to a function.
- Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord) {
  if (...check if not a stop word...)
    onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen so as not to assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter {
  public void filter_stop_words(
      string word) {
    if (...check if not a stop word...)
      onNoStopWord(word);
  }

  public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points are turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not… I can always backtrack and refine a process step using functional design later once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step: Since neither Extract words nor Reformat knows of its environment, neither needs to be touched by the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
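In this text-only excerpt the code figures are missing, so here is an illustrative sketch of what the final translation could look like in C#. Only Extract words is named in the prose above; the other function names are inferred from the bullet points, not taken from the original figures:

using System;
using System.Collections.Generic;

class WordWrapper {
    public static string WordWrap(string text, int maxLineLength) {
        // The flow as a plain sequence of function calls,
        // with Split_long_words "phased in" for increment 2.
        var words = Extract_words(text);
        var shortWords = Split_long_words(words, maxLineLength);
        var lines = Assemble_lines(shortWords, maxLineLength);
        return Assemble_text(lines);
    }

    static IEnumerable<string> Extract_words(string text) {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    // Increment 2: chop words longer than a line into line-sized chunks.
    static IEnumerable<string> Split_long_words(IEnumerable<string> words, int maxLineLength) {
        foreach (var word in words)
            for (var i = 0; i < word.Length; i += maxLineLength)
                yield return word.Substring(i, Math.Min(maxLineLength, word.Length - i));
    }

    // Greedy line assembly: add words while they fit, then emit the line.
    static IEnumerable<string> Assemble_lines(IEnumerable<string> words, int maxLineLength) {
        var line = "";
        foreach (var word in words) {
            if (line.Length == 0) line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength) line += " " + word;
            else { yield return line; line = word; }
        }
        if (line.Length > 0) yield return line;
    }

    static string Assemble_text(IEnumerable<string> lines) {
        return string.Join("\n", lines);
    }
}

Note how the increment 2 step is just one more call spliced into the sequence; delete that line and you are back at the increment 1 flow, with nothing else touched.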
What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen it all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here. For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also it seems to me the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • Visual Studio & TFS – List of addins, extensions, patches and hotfixes – Latest and Greatest

    - by terje
This post is a list of the addins and extensions we (I) recommend for use in Inmeta. It comes up all the time – what to install, where the download sites are, etc. – and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 2010 connected to a Team Foundation Server 2010. The edition of Visual Studio I use is the Ultimate Edition, but as many stay with the Premium Edition I've marked the extensions which only work with the Ultimate edition. I've also split the group into Recommended (which means Required), Optional (which means Recommended) and Nice to Have (which means Optional).

The focus is to get a setup which can be used for a complete coding experience for the whole ALM process. The Code Gallery is found either through the Tools/Extension Manager menu in Visual Studio or through this link. The ones to really download are those in the Recommended category. Then consider the Optional ones based on your needs. The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives. The components directly associated with Visual Studio from Microsoft should be common; see the "From Microsoft" field per entry.

(Format: Product | Available on Code Gallery | Latest version | License | Recommendation | Applicable to | From Microsoft)

- TFS Power Tools Sept 2010 | Complete setup msi on link, split into parts on Code Gallery | Sept 2010 | Free | Recommended | TFS integration | Microsoft: Yes
- Productivity Power Tools | Yes | 10.0.11019.3 | Free | Recommended | Coding | Microsoft: Yes
- Code Contracts | No | 1.4.30903 | Free | Recommended | Coding & Quality | Microsoft: Yes
- Code Contracts Editor Extensions | Yes | 1.4.30903 | Free | Recommended | Coding & Quality | Microsoft: Yes
- VSCommands | Yes | 3.6.4.1 Lite version | Free (good enough) | Nice to have | Coding | Microsoft: No
- Power Commands | Yes | 1.0.2.3 | Free | Recommended | Coding | Microsoft: Yes
- Feature Pack 2 | No; MSDN Subscriber download under Visual Studio 2010 | FP2 | Part of MSDN Subscription | Recommended | Modeling & Testing | Microsoft: Yes
- ReSharper | No (trial only) | 5.1.1 | Licensed | Recommended | Coding & Quality | Microsoft: No
- dotTrace | No | 4.0.1 | Licensed | Optional | Quality | Microsoft: No
- NDepend | No (trial only) | Licensed | Optional | Quality | Microsoft: No
- tangible T4 editor | Yes | 1.950 Lite version | Free (good enough) | Optional | Coding (T4 templates) | Microsoft: No
- Reflector | No (trial of Pro version only) | 6.5 Lite version | Free (good enough) | Recommended | Coding/Investigation | Microsoft: No
- LinqPad | No | 4.26.2 | Licensed | Nice to have | Coding | Microsoft: No
- Beyond Compare | No | 3.1.11 | Licensed | Recommended | Coding/Investigation | Microsoft: No
- Pex and Moles | Moles available alone on Code Gallery; complete on MSDN Subscriber download under Visual Studio 2010 | 0.94.51023 | Part of MSDN Subscription | Optional | Coding & Unit Testing | Microsoft: Yes
- ApexSQL | No | Licensed | Nice to have | SQL | Microsoft: No

Some important patches, upgrades and fixes (Format: Product | Date | Information | Recommendation | Applicable to):

- Scrolling context menu KB2345133 and KB2413613 | October 2010 | Here | Recommended | Visual Studio
- MTM Patch | October 2010 | Here and here, KB2387011 | Recommended (if you use MTM)
- MTM Data warehouse fix | June 2010 | Iteration dates fail with SQL 2008 R2 (KB2222312); affects the Burndown chart in the Agile workbook | Only for SQL 2008 R2 | Server
- Upgrade 2008 to 2010 issue and hotfix | August 2010 | Fixes problems with labels and branches which are lost during upgrade; apply before upgrade. Note: this has been fixed in the latest re-release of the TFS Server dated Aug 5th 2010 (see here); recommends downloading the latest bits | Only if upgrading from earlier 2008 bits | Server

    Read the article

  • If You Could Cut Your Meeting Times in ½ Would You?

    - by [email protected]
By Brian Dayton on April 22, 2010 2:02 PM

I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How?

Social Media for the Enterprise, Not Social Media In the Enterprise

Bear with me. I'm not talking about whether or not workers should or shouldn't have access to Facebook on corporate networks. That topic has been discussed at length. I'm also not talking about the direct benefits of Social Networking tools like Presence (the ability to see someone online and ask a question in real-time), blogs, RSS feeds or external tools like Twitter.

The Un-Measurable Benefits

Would you do something that you believe will have a positive effect - but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids - and the same has happened to my friends and family - I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years, but I've increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization. Especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition - making meetings shorter/more effective - a large amount of time is spent:

- At Project Kick-off: Meeting and understanding team members' histories, goals & roles
- Ongoing: Summarizing events since the last meeting or update email

In my personal, Facebook life today I know that:

- My best friend from college has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home
- One of my co-workers started conference calls at 6:30 this morning
- My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too)

Strengthening Weak Links

A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010), calls out the #1 misconception as follows: 1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links" - that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future. I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as saying it doesn't matter. I can point to two extremely valuable work relationships that I've had over the past 5 years:

- I shared an office with one of them
- I met the other person, face-to-face, only once

Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post, The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up.

Part 1: Your First Web API [Video and code on the ASP.NET site]

This screencast starts with an overview of why you'd want to use ASP.NET Web API:

- Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
- Scale (who doesn't love the cloud?!)
- Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions)

Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Http;

namespace NewProject_Mvc4BetaWebApi.Controllers {
    public class ValuesController : ApiController {
        // GET /api/values
        public IEnumerable<string> Get() {
            return new string[] { "value1", "value2" };
        }

        // GET /api/values/5
        public string Get(int id) {
            return "value";
        }

        // POST /api/values
        public void Post(string value) {
        }

        // PUT /api/values/5
        public void Put(int id, string value) {
        }

        // DELETE /api/values/5
        public void Delete(int id) {
        }
    }
}

Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note:

- This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API and HTML-returning controller uses are similar, and different where API and HTML use differ.
- A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions.
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
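If you'd rather exercise the verb-based routing from code than from the F12 tools, here is a small console sketch using HttpClient against the controller above. The port number is made up, and the relative paths assume the default Web API route:

using System;
using System.Net.Http;

class ApiSmokeTest {
    static void Main() {
        using (var client = new HttpClient()) {
            client.BaseAddress = new Uri("http://localhost:12345/"); // hypothetical dev server port

            // GET /api/values -> dispatched to ValuesController.Get()
            Console.WriteLine(client.GetStringAsync("api/values").Result);   // ["value1","value2"]

            // GET /api/values/5 -> dispatched to ValuesController.Get(5)
            Console.WriteLine(client.GetStringAsync("api/values/5").Result); // "value"

            // DELETE /api/values/5 -> dispatched to ValuesController.Delete(5);
            // the void action produces a status code, no body.
            var response = client.DeleteAsync("api/values/5").Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}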

    Read the article

  • Fusion CRM ISV program is gaining weight: Examples of certified add-ons

    - by Richard Lefebvre
The Fusion CRM ISV program is gaining traction. Please find below a few examples of partners that have certified their add-ons to work seamlessly on top of Oracle Fusion CRM. For more information, please contact [email protected]

- Opportunity-to-Quote. Big Machines now integrates seamlessly with Oracle Fusion CRM, enabling customers with complex products and services and multiple sales channels to streamline the entire opportunity-to-quote process, including product selection, configuration, pricing, quoting, and approval workflows. Create a custom hyperlink in the Opportunity to invoke the Big Machines CPQ application to create a quote and sync up with the Fusion CRM custom quote object using the CRUD operations. The quote can be updated using the custom button in the custom tab in the opportunity details. See: http://www.bigmachines.com/oracle.php

- SaaS Billing and Subscription Management. Is your prospect/customer asking whether top billing partners support Fusion CRM? Positioning an integrated CRM solution for billing usage and subscription-based services? Need to implement a billable solution on the Oracle Java Cloud Service? Aria Systems and Zuora have recently engaged with Oracle to deepen their integrations with Fusion CRM and team with Oracle for joint opportunities.

- Google Apps, SharePoint, Email-CRM Integrations.
  - Do your prospects use Google Apps in their business operations? A "Best of AppExchange" award winner recently completed their integration for Fusion CRM. CirrusInsight plugs Fusion CRM web services directly into Gmail, allowing you to search an existing opportunity or contact, provide account information, and create an interaction such as a phone call, appointment, or email against a customer or contact in Fusion CRM directly from Gmail.
  - An EMEA / France based partner, Aryvart provides bi-directional synchronization of appointments and tasks between Google calendar and Oracle Fusion CRM. For customers, it means adopting Oracle Fusion CRM while continuing to use Google calendar for appointments.
  - Looking to lower the barrier and expand in SharePoint accounts? InFact Group (EMEA / France & Germany) provides a Microsoft SharePoint Connector for Oracle Fusion CRM. With this solution, you can store documents attached to an opportunity in a Microsoft SharePoint repository. For customers, it means adopting Oracle Fusion CRM while continuing to collaborate across existing content management infrastructure.
  - Need to connect to MacMail, GroupWise, or Outlook/Exchange? Omni Technology is a partner whose Riva CRM Integration recently engaged to support Fusion CRM as a key platform.

- Migration tools from competitive and legacy CRMs to Oracle Fusion CRM. A partner with the tools and techniques to speed adoption, Conemis provides data integration tools to export data from a legacy CRM and import it into Oracle Fusion CRM via Web Services APIs. For customers, it means reducing the cost of data migration from a legacy CRM system into Oracle Fusion CRM.

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers.

Simplicity

To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff.

Productivity

The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates.

Lower TCO

Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. First, we added another integration framework to enable cost-effective linking of the Staffing firm's PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Book “Team Foundation Server 2012 Starter” published!

    - by Jakob Ehn
During the summer and fall this year, my colleague Terje Sandstrøm and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter and it is published by Packt Publishing. You can find it at http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon http://www.amazon.com/dp/1849688389

The book is part of a concept that Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guideline to get it up and working. It covers the fundamentals, from installing and configuring it to how to use it with source control, work items and builds. It is done as a step-by-step guide, but also includes best practices advice in the different areas. It covers the use of both the on-premises and the TFS Services version. It also has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson did the review of the book - thanks again Mathias! We hope the book fills the gap between the different online guide sites and the more advanced books that are out. Check it out and please let us know what you think of the book!

Book Description

Your quick start guide to TFS 2012, top features, and best practices with hands-on examples.

Overview

- Install TFS 2012 from scratch
- Get up and running with your first project
- Streamline release cycles for maximum productivity

In Detail

Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile Planning Tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

What you will learn from this book

- Install TFS 2012 on premise
- Access TFS Services in the cloud
- Quickly get started with a new project with product backlogs, source control, and build automation
- Work efficiently with source control using the top features
- Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
- Learn about the existing process templates, such as Visual Studio Scrum 2.0
- Manage your product and sprint backlogs using the Agile planning tools

Approach

This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

Who this book is written for

If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express Tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements. This includes improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 tooling support in the box, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE tooling support for web projects, HTML5 IntelliSense for ASP.NET, and more. TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.

VS 2010 SP1 Download

Click here to download and install SP1 for all versions of Visual Studio (including Express). This installer examines what you have installed on your machine, and only downloads the servicing updates necessary to bring it to SP1. The time it takes to download and update will consequently depend on what you have installed. Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components.

Web Platform Installer Bundles

In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:

- VS 2010 SP1 WebPI Bundle
- Visual Web Developer 2010 SP1 WebPI Bundle

The above WebPI bundles automate installing:

- VS 2010/VWD 2010 SP1
- ASP.NET MVC 3 (runtime + tools support)
- IIS 7.5 Express
- SQL Server Compact Edition 4.0 (runtime + tools support)
- Web Deployment 2.0

Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles. This means that you can run the WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading/installing these components again. Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the below posts to learn more about how to use them after you run the above bundles:

- Visual Studio 2010 SP1 and IIS Express
- Visual Studio 2010 SP1 and SQL CE for ASP.NET

The above feature additions work with any web project type - including both ASP.NET Web Forms and ASP.NET MVC.

Additional SP1 Notes

Two additional notes about VS 2010 SP1:

1) One change we made between RTM and SP1 is that by default Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we've seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration with SP1 using Visual Studio's Tools->Options menu command - we did not remove support for HW acceleration on XP, we simply changed the default setting for it. Jason Zander has written more details on the change and how to re-enable HW acceleration inside VS here.

2) We have discovered an issue where installing SP1 can cause T-SQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works - but IntelliSense doesn't show up). The SQL team is investigating this now and I'll post an update on how to fix this once more details are known.

Hope this helps,

Scott

P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • A SharePoint Developer’s Toolchest

    - by Sahil Malik
When we develop for SharePoint, we end up using many tools, third party or Microsoft, to facilitate our development. What are some of your favorite tools? Mine are as below -

1. Reflector: When I saw Reflector, I was pretty convinced that a tool better and more useful than it doesn't exist. Well, I was wrong! Redgate took over Reflector and they still offer it as a free version, but they have a paid version called Reflector Pro. It lets you debug third party source code, as if you had the source code. Brilliant! Who needs documentation anymore when you have real code?

2. ULS Viewer: It is no secret, reading ULS logs is a pain in the rear. Well, not so with ULS Viewer, which does work with SharePoint 2007 as well. But it's just way cooler with SharePoint 2010. You know when you get an error in SharePoint 2010 it shows you an error like as below: Well, the ULS Viewer will allow you to set filtering criteria, allowing you to immediately zero in on an error, even across multiple WFEs. Also there are numerous other facilities built into the tool, such as advanced filtering, critical error notifications, etc. A must-have! You can read the documentation of the ULS Viewer here.

3. SPDisposeCheck: Did you know that the MySite object is strange? What is strange about it? That you have to dispose it even if you didn't create it!? Well, who the hell remembers all that! Honestly, I do! And you should too. But there is a tool to help you sanitize your code. And that is SPDisposeCheck. You run it against your DLL or EXE, and it will give you suggestions on where you might have missed calling dispose on an object. You still have to use your head, but having this tool helps.

4. DebugView: Debugging for SharePoint can be difficult sometimes. Sometimes your breakpoints don't get hit. And while you can try and make them hit, it is sometimes easier to just write a bunch of Debug.WriteLines, and catch them from an external application such as DebugView. You simply use your code, and DebugView will catch all the Debug.WriteLines in your code like this -

5. BGInfo: One annoying thing about SharePoint projects, it causes the number of servers to multiply like bunnies. As I'm RDP'ing into many computers trying to diagnose a crazy issue, sometimes it becomes hard to remember which machine is which. BGInfo puts all that on the wallpaper, along with a bunch of other useful info. A bit like this -

6. WSPBuilder: SharePoint 2007 only, but I think there may be a version for SP2010 coming later. I think the VS2010 tools for SP2010 development are quite nice, so WSPBuilder, well, so far I don't miss it. But let's see what WSPBuilder for 2010 brings - I haven't seen it yet. However, I want to confidently assert that WSPBuilder for SP2007 is simply awesome.

7. SharePoint Manager: The SharePoint Manager 2010 is a SharePoint object model explorer. It enables you to browse every site on the local farm and view every property. It also enables you to change the properties. The VS2010 dev tools now include a server explorer, which shows you a subset of properties in read-only mode. I would LOVE to see SharePoint Manager-like functionality built into VS2010. SharePoint Manager, a total must-have.
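To make tool #4 concrete, here is a minimal sketch of the DebugView workflow. The messages are invented, but the mechanism is standard: Debug.WriteLine routes through the default trace listener to OutputDebugString, which DebugView captures when Win32 debug output capture is enabled:

using System.Diagnostics;

class FeatureDiagnostics {
    static void Main() {
        // Start DebugView first, then run this; the lines appear in its window.
        // Note: Debug.* calls are only compiled into DEBUG builds.
        Debug.WriteLine("Entering feature activation");
        Debug.WriteLine("Provisioning list instance 'Tasks'"); // hypothetical step
        Debug.WriteLine("Feature activation complete");
    }
}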

    Read the article

  • SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log

    - by pinaldave
I have been dreaming of writing a book for a really long time, and I finally got the chance - in fact, two chances! I recently wrote two books: SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 [Amazon] | [Flipkart] | [Kindle] and SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues [Amazon] | [Flipkart] | [Kindle]. I had a lot of fun writing these two books, even though sometimes I had to sacrifice family time and time for other personal development to write them. The good side of writing books is when the effort put into writing them is recognized by readers and kind organizations like expressor studio.

Book writing is a complex process. Even after you spend months, maybe years, writing the material you still have to go through the editing and fact checking processes. And, once the book is out there, there is no way to take back all the copies to change mistakes or add something you forgot. Most of the time it is a one-way street. Just like every author, I had a dream that after the books were written, they would be loved by people and gain acceptance by an audience. My first book, SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008, is extremely popular because it helps lots of people learn various fundamental topics. My second book covers beginning to learn SQL Server Wait Stats, which is a relatively new subject. This book has had very good acceptance in the community.

Helping my community is my primary focus, so I was happy to see this year's SQLPASS tag line: "This is a Community." At the event, the expressor studio guys came up with a very novel idea. They had previously used my books and they had found them very useful. They got 100 copies of the book and decided to give them away to community folks. They invited me and my co-author Rick Morelan to hold a book signing event. We did a book signing on Thursday between 1 pm and 2 pm.

This event was one of the best events for me. This was my first book signing event outside of India. I reached the book signing location around 20 minutes before the scheduled time and what I saw was a big line for the book signing event. I felt very honored looking at the crowd and all the people around the event location. I felt very humbled when I saw some of my very close friends standing in the line to get my signature. It was really heartwarming to see so many enthusiasts waiting for more than an hour to get my signature. While standing in line I had the chance to have a conversation with every single person who showed up for the signature. I made sure that I repeated every single name and wrote it in every book with my signature. There is a saying that if we write a name once we will remember it forever. I want to remember all of you who saw me at the book signing. Your comments were wonderful, your feedback was amazing and you were all very supportive. I have made a note of every conversation I had with all of you when I was signing the books. Once again, I just want to express my thanks for coming to my book signing event. The whole experience was very humbling. On top of it, I want to thank the expressor studio people who made it possible and organized the whole signing event. I am so thankful to them for facilitating the whole experience, which is going to be hard to beat by any future experience.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • F# in ASP.NET, mathematics and testing

    - by DigiMortal
Starting with Visual Studio 2010, F# is a full member of the .NET Framework languages family. It is a functional language with syntax specific to functional languages, but I think it is time for us to also notice and study functional languages. In this posting I will show you some examples of cool things other people have done using F#.

F# and ASP.NET

As I am an ASP/ASP.NET MVP I am - of course - interested in how people use different languages and technologies with ASP.NET. C# MVP Tomáš Petrícek writes about developing ASP.NET MVC applications using F#. He also shows how to use LINQ To SQL in F# (using the F# PowerPack) and provides a sample solution and a Visual Studio 2010 template for F# MVC web applications. You may also find it interesting how you can create controllers in F#. Excellent work, Tomáš! Vladimir Matveev has an interesting example about how to use F# and the ApplicationHost class to process ASP.NET requests outside of IIS. This is a simple and very straightforward example and I strongly suggest you take a look at it. A very cool example is the project Storm on Codeplex. Storm is a web services testing tool that is fully written in F#. Take a look at this site because Codeplex offers the source code besides the binaries.

Math

Functional languages are strong in fields like mathematics and physics. When I wrote my C# example about the BigInteger class I found out that the recursive version of the Fibonacci algorithm in C# is not performing well. At the same time I made the same experiment in F#, and in F# there were no performance problems with the recursive version. You can find the F# version of the Fibonacci algorithm in Bob Palmer's blog posting Fibonacci numbers in F#. Although the golden spiral is useful for solving many problems, I looked for some practical code example and found one. Kean Walmsley published in his Through the Interface blog the very interesting posting Creating Fibonacci spirals in AutoCAD using F#. There are also other cool examples you may be interested in. Using the numerical components by Extreme Optimization it is possible to do some numerical integration (quadrature method) using F# (a C# example is also available). fsharp.it introduces factorials calculation in F#. Robert Pickering has done very good work on programming The Game of Life in Silverlight and F# - I definitely suggest you try out this example as it is very illustrative too. Whoever wants something more complex may take a look at the Newton basin fractal example in F# by Jonathan Birge.

Testing

After some searching and surfing I found out that there is almost everything available for F# to write tests and test your F# code.

- FsCheck - FsCheck is a port of Haskell's QuickCheck. Important parts of the manual for using FsCheck are almost literally "adapted" from the QuickCheck manual and paper. Any errors and omissions are entirely my responsibility.
- FsTest - This project is designed to build Language Oriented Programming constructs around unit testing and behavior testing in F#. The goal of this project is to create a Domain Specific Language for testing F# code in a way that makes sense for functional programming.
- FsUnit - FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.
- xUnit.NET - xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.net, CodeRush Test Runner and Resharper. It also offers test project integration for ASP.NET MVC.

Getting started

Well, as a first thing you need Visual Studio 2010. Then take a look at these resources:

- F# samples @ MSDN
- Microsoft F# Developer Center @ MSDN
- F# Language Reference @ MSDN
- F# blog
- F# forums
- Real World Functional Programming: With Examples in F# and C# (Amazon)

Happy F#-ing! :)
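As a footnote to the Math section above: the naive doubly-recursive Fibonacci the posting refers to looks roughly like this in C#. The exponential call tree is what makes it slow for larger n; whether and why F# fared better in the author's experiment is his observation, not something this sketch demonstrates:

using System;

class Fib {
    // Naive doubly-recursive Fibonacci: the number of calls grows
    // exponentially with n, which is what makes large n painfully slow.
    static long Fibonacci(int n) {
        return n < 2 ? n : Fibonacci(n - 1) + Fibonacci(n - 2);
    }

    static void Main() {
        Console.WriteLine(Fibonacci(40)); // 102334155, but takes noticeable time
    }
}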

    Read the article

  • Agile Manifesto, Revisited

    - by GeekAgilistMercenary
Again, conversations give me a zillion things to write about. The recent conversation that has cropped up again concerns my various viewpoints on the Agile Manifesto. Not all the processes that came after the manifesto was written, but just the core manifesto itself. Just for context, here is the manifesto in all its glory.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Several of the key signatories at the time went on to write some of the core books that really gave Agile Software Development traction. If you check out the Agile Manifesto Site and do a search for any of those people, you will find a treasure trove of software development information.

My 2 Cents

First off, I agree with a few people out there. Agile is not Scrum, for instance. Do NOT get these things confused when checking out Agile, or pushing forward with Scrum. As David Starr points out in his blog entry, "About 35 minutes into this discussion, I realized I hadn't heard a question or comment that wasn't related to Scrum. I asked the room, 'How many people are on an agile team that is NOT using Scrum?' 5 hands. Seriously, out of about 150 people or so. 5 hands." So know, as this is one of my biggest pet peeves these days, that Scrum is not Agile. Another quote David writes, "I assure you, dear reader, 2 week time boxes does not an agile team make." This is the exact problem. Take a look at the actual manifesto above. First ideal, "Individuals and interactions over processes and tools". There are a couple of meanings in this ideal, just as there are in the other written ideals. But this one has a lot of contention with a set practice such as Scrum. There are other formulas, namely XP (eXtreme) and Kanban are two that come to mind often. But none of these are Agile; they are instead processes based on the ideals of Agile.

Some of you may be thinking, "that's the same thing". Well, no, it is not. This type of differentiation is vitally important. Agile is a set of ideals. Processes are nice, but they can change, they may work for some and not others. The Agile Manifesto covers the ideals behind what is intended, that intention being to learn and find new ways to build better software. Ideals, not processes. Definition versus implementation. Class versus object. The ideals are of utmost importance, the processes are secondary, and the first ideal is what really lays this out for me: "Individuals and interactions over processes and tools". Yes, we need tools, but we need the individuals and their interactions more. For those coming into a development team, I hope you take this to mind. It is of utmost importance that this differentiation is known and fought for. The second the process becomes more important than the individuals and interactions, the team will effectively lose the advantages of the Agile ideals.

This is just one of my first thoughts on the topic of Agile. I will be writing more in the near future about each of the ideals. I will make a point to outline more of my thoughts, my opinions, and experience with the ideals of Agile and the various processes that are out there. Maybe I will stumble upon something new with the help of my readers? It would be a grand overture to the ideals I hold. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article

  • Thoughts on Thoughts on TDD

Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series.

Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with - and I count myself in that bucket - the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.

For me, it all comes down to the answer to the following question: How do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code.

The second part he doesn't like is backing into an architecture (go read to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare - the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're doing the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor. I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer. In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.
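For readers who haven't seen the loop the author mentions, here is a minimal sketch of "write a failing test, make it pass, refactor" using xUnit-style assertions - the framework choice and the example problem are illustrative, not from the original post:

using Xunit;

public class WordCounterTests {
    // Step 1 (red): this test is written first and fails,
    // because WordCounter.Count does not exist yet.
    [Fact]
    public void Counts_words_separated_by_spaces() {
        Assert.Equal(3, WordCounter.Count("a bb ccc"));
    }
}

public static class WordCounter {
    // Step 2 (green): the simplest code that makes the test pass.
    public static int Count(string text) {
        return text.Split(' ').Length;
    }
    // Step 3 (refactor): clean up with the passing test as a safety net,
    // e.g. handle runs of multiple spaces, then re-run the test.
}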

    Read the article

< Previous Page | 331 332 333 334 335 336 337 338 339 340 341 342  | Next Page >