Search Results

Search found 26098 results on 1044 pages for 'self build'.


  • Blog Rebranding

    I have been spending more and more time learning as much as I can about Agile development, and I have also been fairly immersed in rolling out TFS 2010 in our environment.  I feel like it is time to talk about some of my experiences.  With that, I am rebranding my blog to focus on these topics.  I am going to start with a series of posts on the process I have gone through getting TFS 2010 configured for our development teams. Last week, Brian Harry was in our office and gave a great talk on the improved tools in TFS 2010 and how Microsoft uses the tools internally.  I followed that up with a high-level overview of the improved out-of-the-box process templates and the process to customize them.  I am definitely very excited about the new features in 2010 and hopefully will keep up my motivation to blog about it.  I am writing my first post right now about the process I went through to build a task progress report based on the user story progress report in the MSF for Agile Development template.  Stay tuned.

    Read the article

  • 1360x768x32 Resolution in Windows 8 in VirtualBox

    - by mbcrump
    My Lenovo ThinkPad's built-in screen maxes out at 1366x768x32. I wanted to use that same resolution with the Windows 8 Developer Preview inside of VirtualBox. So, what did I do? I downloaded the latest build of VirtualBox, v4.1.6 (because it supports Windows 8 x64), installed the Windows 8 Developer Preview in VirtualBox as I did earlier this year, installed the Guest Additions, and ran the CustomVideoMode command described in this blog post… and quickly found out that I didn't have the option to use 1366x768x32 inside of VirtualBox, despite using the following command: VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1920x1080x32   So how do you fix it? If you do a little research on this resolution, you will find that it is a non-standard resolution. Even if you run the command: VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1366x768x32 it will still not show that resolution inside of VirtualBox. You can fix this easily by using the following command instead: VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1360x768x32 Notice that this command uses a width of 1360 instead of 1366. Now if you go to the display options for Windows 8 inside of VirtualBox, you can select that resolution. Anyway, I hope this helps someone with a similar problem. I created this blog partially for myself, but it is always nice to help a fellow developer. Thanks for reading.
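    For reference, a consolidated sketch of the commands involved. The VM name "Win8" is a placeholder, the Guest Additions must already be installed in the guest, and the getextradata call is simply one way to confirm that the custom mode was stored:

        REM Register the non-standard 1360x768x32 mode for a VM named "Win8" (placeholder name)
        VBoxManage.exe setextradata "Win8" CustomVideoMode1 1360x768x32

        REM List the VM's extra data to verify the mode was recorded
        VBoxManage.exe getextradata "Win8" enumerate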

    Read the article

  • strlen returns incorrect value when called in gdb

    - by alesplin
    So I'm noticing some severely incorrect behavior from calls to standard library functions inside GDB. I have the following program to illustrate:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char *s1 = "test";
            char *s2 = calloc(strlen("test")+1, sizeof(char));
            snprintf(s2, strlen("test")+1, "test");
            printf("string constant: %lu\n", strlen(s1));
            printf("allocated string: %lu\n", strlen(s2));
            free(s2);
            return 0;
        }

    When run from the command line, this program outputs just what you'd expect:

        string constant: 4
        allocated string: 4

    However, in GDB, I get the following, incorrect output from calls to strlen():

        (gdb) p strlen(s1)
        $1 = -938856896
        (gdb) p strlen(s2)
        $2 = -938856896

    I'm pretty sure this is a problem with the glibc shipped with Ubuntu (I'm using 10.10), but this is a serious problem for those of us who spend a lot of time in GDB. Is anyone else experiencing this kind of error? What's the best way to fix it? Build glibc from source? (I'm already running a version of GDB built from source.)
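    A note on the usual cause: when gdb has no prototype information for a library call such as strlen(), it tends to assume an int return value, which mangles the real size_t result. A minimal sketch of the common workaround is to cast the call inside gdb; the file names below are assumptions:

        # rebuild with debug info; file names here are placeholders
        gcc -g -o strlentest strlentest.c

        # cast the return value so gdb does not assume strlen() returns int
        # (use (unsigned long) if size_t is not known to gdb)
        gdb -batch -ex 'start' -ex 'print (size_t) strlen("test")' ./strlentest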

    Read the article

  • Raspberry Pi based Hadoop cluster

    - by Dmitriy Sukharev
    Is it at least possible to build a Hadoop cluster from Raspberry Pi-based nodes? Can such a cluster meet the hardware requirements of Hadoop? And if so, how many Raspberry Pi nodes are required to meet those requirements? I understand that a cluster of several Raspberry Pi nodes, while cheap, is not powerful. My purpose is to organize a cluster without the possibility of losing personal data from my desktop or notebook, and to use this cluster for studying Hadoop. I'd appreciate any suggestions for better ways of organizing a cheap Hadoop cluster for study purposes. Update: I've seen that the recommended amount of memory for Hadoop is 16-24 GB, with multi-core processors and 1 TB of HDD, but that doesn't look like a minimal requirement.
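    If you do try it, the usual first step on nodes this small is to shrink the default Java heaps. A hedged sketch of the kind of settings involved, using Hadoop 1.x-era file names and purely illustrative values:

        # conf/hadoop-env.sh -- cap the daemon heap well below the Pi's RAM (value is illustrative)
        export HADOOP_HEAPSIZE=128

        # conf/mapred-site.xml would likewise need small task JVMs, e.g.
        #   mapred.child.java.opts = -Xmx128m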

    Read the article

  • Is PCI Express x4 faster or slower than a standard PCI slot for graphics cards?

    - by Stephen R
    I am looking at potential motherboards for a computer I want to build and ran into this conundrum. The motherboard has two PCI Express slots that physically accept x16 cards. The catch is that only one of them operates at 16 lanes; the other operates at only 4 lanes. My question is: would it be faster to buy a PCI Express graphics card and install it in the 4-lane PCI Express slot, or would it be better to buy a standard PCI graphics card and install it in one of the available PCI slots?

    Read the article

  • Mahout - Clustering - "naming" the cluster elements

    - by Mark Bramnik
    I'm doing some research and I'm playing with Apache Mahout 0.6. My purpose is to build a system which will name different categories of documents based on user input. The documents are not known in advance, and I also don't know which categories I have while collecting these documents. But I do know that all the documents in the model should belong to one of the predefined categories. For example, let's say I've collected N documents that belong to 3 different groups: Politics, Madonna (the pop star), and Science fiction. I don't know which document belongs to which category, but I know that each one of my N documents belongs to one of those categories (e.g. there are no documents about, say, basketball among these N docs). So I came up with the following idea: Apply Mahout clustering (for example k-means with k=3) to these documents. This should divide the N documents into 3 groups, and this would be my model to learn with. I still don't know which document really belongs to which group, but at least the documents are now clustered by group. Then ask the user to find any document on the web that should be about 'Madonna' (I can't show the user any of my N documents; that's a restriction). Then I want to measure the 'similarity' of this document to each one of the 3 groups. I expect to see that the measured similarity between the user_doc and the documents in the Madonna group of the model will be higher than the similarity between the user_doc and the documents about politics. I've managed to produce the clusters of documents using the 'Mahout in Action' book, but I don't understand how I should use Mahout to measure similarity between a 'ready' cluster group of documents and one given document. I thought about rerunning the clustering with k=3 for N+1 documents with the same centroids (in terms of k-means clustering) and seeing where the new document falls, but maybe there is another way to do that? Is it possible to do this with Mahout, or is my idea conceptually wrong? (An example in terms of the Mahout API would be really good.) Thanks a lot, and sorry for the long question (I couldn't describe it better). Any help is highly appreciated. P.S. This is not a homework project :)
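    For reference, a rough sketch of the Mahout 0.6 command-line pipeline that produces the k=3 clusters and their centroids. The paths are placeholders, and the final similarity check against a new document would still have to be done in code against the centroids written under the clusters output directory:

        # convert raw documents to SequenceFiles, then to TF-IDF vectors (paths are placeholders)
        mahout seqdirectory -i ./docs -o ./docs-seq
        mahout seq2sparse -i ./docs-seq -o ./docs-vectors

        # k-means with k=3 and cosine distance; -cl also assigns each document to a cluster
        mahout kmeans -i ./docs-vectors/tfidf-vectors -c ./initial-clusters -o ./clusters \
            -k 3 -x 20 -cl -dm org.apache.mahout.common.distance.CosineDistanceMeasure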

    Read the article

  • What is the best book for the preparation of MCPD Exam 70-564 (Designing and Developing ASP.NET 3.5 Applications)?

    - by Steve Johnson
    Hi all, I have seen a couple of questions like this one and scanned through the answers, but somehow the replies were not satisfactory or practical. So I wondered whether people who have gone through it might suggest a better approach for preparing for this exam. Goal: My goal is actually NOT merely to pass the exam. I intend to actually master the skill. I have been doing ASP.NET web development for approximately 1.5 years, and I want to study something that really improves "design and development skills" in web development in general, and ASP.NET specifically, which I can put to use and build upon. Please suggest a book that teaches professional ASP.NET design and development skills and approaches to quality development, by working through practice design scenarios and their solutions, and through case studies that involve design problems and their implemented solutions. Edit: I have found the Microsoft training kits to be fairly interesting and helpful, as they tend to increase knowledge. I have put a lot of things to use after getting a good explanation of them from the training kits. However, as far as the Microsoft Training Kit for 70-564 is concerned, there are not a lot of good reviews about it. From what I have read and searched on the net, the reviews on Amazon and on various forums (Stack Exchange and Experts Exchange) were more inclined to the conclusion that "the Microsoft Training Kit for Exam 70-564 is not good. It is not as good as other kits from Microsoft, such as the training kit for Exam 70-562 and others." So I was looking for a proper book containing examples from practical, real-world scenarios and case studies, from which I can not only learn but also master the skills, before wasting money on the Microsoft Training Kit for Exam 70-564. Waiting for experts to provide suitable advice.

    Read the article

  • Ubuntu (desktop) server needed for school

    - by Dave Moody
    I need to build an Ubuntu server for school, but I hate command-line interfaces with a passion. I need a server with a desktop, like the current Win Server 2003 (which has crashed, ugh). This server needs to host the school LAN domain, provide a directory service (like Active Directory), and handle the roaming profiles that the students and staff access through WinXP workstations. It also needs to work as a file server and maybe a print server. Can anyone advise me on how to create an Ubuntu server starting with the Ubuntu Desktop edition? Many thanks if you respond... Dave.
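    Not a full recipe, but the package side of a setup like this is roughly sketched below; the real work is the Samba domain and roaming-profile configuration in /etc/samba/smb.conf, which is assumed here rather than shown:

        # add a desktop to an existing Ubuntu Server install (or simply start from the Desktop CD)
        sudo apt-get install ubuntu-desktop

        # file/print serving and an NT4-style domain for the XP workstations
        sudo apt-get install samba cups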

    Read the article

  • "TMGR is Missing" after repair-installing Windows XP

    - by djzmo
    I have two OSes installed on my computer: Windows XP Professional, and Windows 7 Ultimate (Release Candidate 1/Build 7100). I used the Windows 7 boot loader by default to choose between the OSes. While I was using WinXP, my computer suddenly and continuously lagged, and the only way to fix it was by repair-installing it (I've experienced this many times before, but without Windows 7 installed). Everything went OK, but once my XP was successfully reinstalled, I could not boot my Windows 7 anymore. Every time I tried to boot the hard disk that contains Windows 7, an error appeared: "TMGR is Missing". Now I have no idea how I can get back to my Windows 7. Any kind of help would be appreciated! :)
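    A likely cause: an XP repair install overwrites the Windows 7 boot loader with its own, and the resulting message is usually worded "BOOTMGR is missing". A hedged sketch of the standard repair, run from the Windows 7 DVD's recovery command prompt:

        REM Windows 7 DVD -> Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd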

    Read the article

  • Can't connect to IIS7 on one of my machines

    I have 2 computers, both running Win7 Professional x64. Computer "A" (192.168.0.10) is my work machine; it contains my tools etc. Computer "B" (192.168.0.15) is supposed to be my build server / web server for my projects. I've installed IIS on "B", and installed the IIS7 remote manager on "A". I'm trying to connect from A's IIS7 Manager to B's IIS, but it fails. I have little knowledge of IIS, and I feel that's the main reason. I can ping B from A and get positive results, so the machines do see each other. A and B are in the same workgroup but not in a domain (if that matters) - they're on my home network. What do I need to do to "see" B's IIS?
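    One thing worth checking: remote administration is off by default in IIS 7.x. A hedged sketch of what typically has to happen on "B", from an elevated command prompt, before another machine's IIS Manager can connect; the feature name and port are the usual defaults:

        REM install the Management Service feature, allow remote connections, then start the service
        dism /online /enable-feature /featurename:IIS-ManagementService
        reg add HKLM\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
        net start wmsvc

        REM WMSVC listens on TCP 8172 by default, so that port must be allowed through Windows Firewall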

    Read the article

  • Win Server 2008 R2 - Mapped shared folder hanging?

    - by M-Tech
    I have recently built a Windows Server 2008 R2 machine. It is purely for file-server purposes and is very much a basic build; all Windows updates are installed and it is part of the domain. I have set up a shared folder on the C: drive and added permissions for domain users as co-owners. The client machines run XP SP3 and are part of the domain as well. We have a few servers running the same setup on a few of our sites, but this one in particular crashes users' machines (explorer.exe hangs for at least a few minutes) when they attempt to access the shared folder. I have also turned off the power-save option on the network card; still no change. Any help with this is very much appreciated and I look forward to hearing from you ;)

    Read the article

  • Can anyone point me to some open-source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene-graph algorithms and control the rendering pipeline very specifically. My ideal candidate would be either a rendering engine or a game engine with a modular design that might be ready to go out of the box, but simple enough that I could rip out some of the guts of the rendering management and implement my own if needed. It's a tough call, because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and I would be less likely to run into some obscure, less-documented problem, but OpenGL might be acceptable if the recommended framework is definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (as part of adding to the density of geometry detail).

    Read the article

  • How do I get debuild to put the binary in /usr/bin?

    - by SammySP
    I have recently been trying to package a small Python utility to put on my PPA, and I've almost got it to work, but I'm having problems making the package install the binary (a chmod +x Python script) under /usr/bin. Instead it installs under /. I have this directory structure - http://db.tt/0KhIYQL. My package Makefile is like so:

        TARGET=usr/bin/txtrevise

        make:
                chmod +x $(TARGET)

        install:
                cp -r $(TARGET) $(DESTDIR)

    I've used $(DESTDIR), as I understand it, to place the file under the debian subdirectory when debuild is run. I have the txtrevise script, my executable, in the usr/bin folder under the root of my package. I also have the Makefile and usr/bin/txtrevise in my tarball: txtrevise_1.1.original.tar.gz. However, when I build this and look inside the Debian package, txtrevise is always at the root of the package instead of under usr/bin, and it will be installed to / instead of /usr/bin. How can I get debuild to put the script in the right place? Thanks. Any help would be greatly appreciated. I'm stumped.
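    One common fix, sketched below under the assumption that the script sits at usr/bin/txtrevise in the source tree: have the install target copy into $(DESTDIR)/usr/bin and create that directory itself, rather than copying into $(DESTDIR) directly. debuild/debhelper sets DESTDIR to the package staging directory (e.g. debian/txtrevise), and recipe lines must be indented with a tab:

        install:
                install -D -m 0755 usr/bin/txtrevise $(DESTDIR)/usr/bin/txtrevise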

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!
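    As a sketch of how you might consume it: the clone URL below is only a guess based on the oss.oracle.com/git hosting mentioned above, so treat it as a placeholder and take the real address from the gitweb page.

        # placeholder URL -- find the actual repository via gitweb on oss.oracle.com/git
        git clone git://oss.oracle.com/git/redpatch.git
        cd redpatch

        # one commit per Red Hat fix, so the log itself is the broken-out patch series
        git log --oneline
        git show <commit-id>    # view an individual broken-out patch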

    Read the article

  • How to read reduce/shift conflicts in LR(1) DFA?

    - by greenoldman
    I am reading an explanation (the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 293 in the 2nd edition) and I have moved forward from my last question: How to get lookahead symbol when constructing LR(1) NFA for parser? Now I have a "problem" (maybe not a problem, but rather a need for confirmation from some more knowledgeable people). The authors present a state in LR(0) which has a reduce/shift conflict. Then they build the DFA for LR(1) for the same grammar, and now they say it does not have a conflict (lookaheads at the end):

        S -> E . eof
        E -> E . - T eof
        E -> E . - T -

    and there is an edge from this state labeled -, but no edge labeled eof. The authors say that on eof there will be a reduce, and on - there will be a shift. However, eof appears on a shift item as well (as a lookahead). So my personal understanding of the LR(1) DFA is this -- you can drop the lookaheads for shift items, because they serve no purpose now -- shifts rely on the input, not on lookaheads -- and after that, remove duplicates:

        S -> E . eof
        E -> E . - T

    So the lookahead for the reduce item really serves as input, because at this stage (all required input for that item has been read) it really is the incoming symbol. For shifts, the input symbols are on the edges. So my question is this -- am I actually right about dropping the lookaheads for shift items (after fully constructing the DFA)?

    Read the article

  • Is there a way to get Drobo-like functionality out of ZFS (or some other free FS)? [closed]

    - by Steve Rowe
    I really like the concept of the Drobo; I just don't like the speed. I want the redundancy and easy upgradeability of the Drobo, but faster, and I would love to be able to build something on my own. ZFS seems like a good place to start, but it offers either upgradeability or redundancy (RAID-Z), not both. To clarify, I want to be able to have an array of disks which is expandable by just adding a drive, and which has redundancy built in. I found instructions for making ZFS act like a Drobo, but they are quite complicated and upgrading is a lot of work. Has anyone automated something like what is described there? Is there a different file system I should be looking at?

    Read the article

  • Getting lots of JavaScript problems when using Opera 11.00 to surf

    - by s hanley
    Sites like eBay and even Super User stop working properly when I use Opera 11.00. Menus stop working everywhere from eBay to GoDaddy: hovering over a menu item doesn't expand it, and no sub-menu slides out. This makes a large number of very popular websites unusable. Am I right in assuming this is a JavaScript issue? I use Opera for the Turbo feature (I have tested Opera with and without Turbo, so it's not Turbo's fault) because I'm on mobile broadband until I get my phone line sorted out. Turbo helps me save money, as well as allowing me to surf at a sane speed. Is there a Firefox or Chrome equivalent to Opera Turbo that doesn't cost money? I'm using Opera 11.00, build 1156.

    Read the article

  • Oracle WebLogic 12c for New Projects – Webcast November 7th 2013

    - by JuergenKress
    Fast-growing organizations need to stay agile in the face of changing customer, business or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, where we will show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features for lowering the cost of operations, improving performance and enhancing scalability. Agenda: Oracle WebLogic Server introduction; application development on WebLogic using Java EE; overview of the application deployment process; monitoring application performance; Q&A. November 7th, 2013, 9am UTC/11am EET. REGISTER NOW. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Form development optimization

    - by Juan
    Like many web developers, I build forms all the time, and I found myself doing the same thing every time: placing input fields, assigning a name to each, wiring the form up with Ajax, then writing the PHP, which involves assigning a PHP variable to each $_REQUEST['var'], escaping and validating the data, building the HTML and emailing the results. So I found that 70% of the work is duplicated, but I just can't duplicate a page and change the fields; I end up wasting more time reformatting, deleting and adding different fields than creating the form from scratch. I started planning to write a "list of IDs to HTML+PHP" converter, into which I'd input all the IDs and which would output the basic HTML and PHP. Then I thought: there must be thousands of developers who go through this; I'd be reinventing the wheel. So this is my question: I'm trying to find that wheel that somebody must have invented already. I found this: http://www.trirand.com/blog/jqform/ which does more or less what I'm looking for, but it's an expensive solution and it has too much functionality for what I'd be using it for. Which tools do you use to optimize repetitive tasks like this in HTML and PHP?

    Read the article

  • Let's keep informed with "Data Explorer"

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced: a Microsoft SQL Azure Lab whose codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, my first impression was confirmed: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging." The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out thanks to Data Explorer Mashup. The flexibility in the manipulation of information is provided by the Data Explorer Formula Language, a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality, and this would inevitably lead to anarchic data management... What do you think about that?

    Read the article

  • Recursively copy only new or changed files

    - by Niklas
    I'm sure this must have been asked and answered before, but I just can't find it right now... I have a Visual Studio post-build action that currently does a recursive copy (using xcopy) of an output folder to a different folder. This takes longer than I like and I'd like to only copy newly created and newer (changed) files each time (which xcopy doesn't seem to support). I cannot depend on any not-installed-by-default tool since the solution is used by different developers on different machines. What would a superuser do?
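    For what it's worth, both of the stock Windows copy tools can do an incremental copy; a sketch of the two usual post-build variants, with the paths as placeholders:

        REM xcopy: /D with no date copies only source files newer than the destination copy
        xcopy "C:\Project\bin\Release\*" "D:\Deploy\" /D /E /Y /I

        REM robocopy (built into Win7 / Server 2008): /XO skips source files older than the copy already in the destination
        robocopy "C:\Project\bin\Release" "D:\Deploy" /E /XO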

    Read the article

  • Bayesian content filter for vBulletin [on hold]

    - by mc0e
    I've been tasked with coming up with a tool to automatically flag some posts for moderator attention on a large vBulletin forum. It's not spam per se, but the task has a lot in common with the sort of handling that might be done by a spam-protection plugin (a "mod" in vBulletin speak). There's only so much I can say, but the task does not involve bad users so much as particular kinds of posts which the moderators need to be aware of. Filtering on user registrations and links is therefore not useful, and we are talking about posts by real human users. What I'm looking for is an existing Bayesian classification plugin, or something I can study to get an understanding of how to do the vBulletin side of the interface in order to build such a thing. I.e., I'd need ways for moderators to list flagged posts and to correct the classification of posts which have been mis-classified. Ideally I want a 3-way split with an "unsure" category, in order to reduce what has to be reviewed to find any mis-classifications. Any pointers? I've searched around a bit, and so far what I've found has been more or less entirely targeted at intervening in sign-ups (mostly using StopForumSpam), CAPTCHAs, and the use of external services like Akismet, which are spam-specific. I'm also considering an external solution which might be able to be interfaced…

    Read the article

  • StyleCop 4.7.38.0 has been released

    - by TATWORTH
    StyleCop 4.7.38.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:

        Move Registry functions into common Utils class. Styling fixes.
        Dictionary updates. Styling fixes.
        Update Styling. Styling fixes.
        Update docs.
        Spelling fixes in our own source.
        Add solution-specific spellings to our own Settings.StyleCop.
        Deploy more up-to-date spelling checkers and dictionaries.
        Update our own StyleCop and dictionaries for analyzing our own build.
        Update the custom dictionaries.
        Update the spellchecker to work for 32- or 64-bit processes.
        Update latex parser.
        Update the latex parser for $$...$$.
        Fix the latex parser to allow any char between $ and $.
        Add a new tab to the settings editor to add/remove spelling words. Ignore words starting and ending with a '$'. Add support for our own recognized words in the settings file.
        If the spelling library can't load then don't analyse the spellings and fail gracefully.
        Fix for 7398. Insert the correct type-name in the example for summary.
        Fix for 7396. Added new tests. Allow doc elements to end with <c> elements and not be reported for lack of white-space or for being too short.

    Read the article

  • Pop current directory until specific file is found

    - by edarroyo
    I'm interested in writing a script with the following behavior: 1) see if a file, build.xml, exists in the current directory; if so, run a command supplied through arguments to the script; 2) if not, pop the current directory and look at the parent; 3) go to 1. The script would end once we find the file or we reach the root. Also, once the script finishes and control comes back to the user, I want the current directory to be the one where the script was initially launched. I'm not very familiar with shell scripting, but any help/guidance will be much appreciated.
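    A minimal sh sketch of that loop; the command to run is passed as the script's arguments, and because the script runs as a separate process and only changes directory inside a subshell, the caller's working directory is left untouched:

        #!/bin/sh
        # walk upward from the current directory until build.xml is found or / is reached
        dir=$(pwd)
        while :; do
            if [ -f "$dir/build.xml" ]; then
                ( cd "$dir" && "$@" )   # run the supplied command where build.xml lives
                exit $?
            fi
            [ "$dir" = "/" ] && break
            dir=$(dirname "$dir")
        done
        echo "build.xml not found" >&2
        exit 1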

    Read the article

  • Been doing .NET for several years and am thinking about a platform change. Where do people suggest I go?

    - by rsteckly
    Hi, I've been programming in .NET for several years now and am thinking maybe it's time for a platform switch. Any suggestions about which platform would be the best to learn? I've been thinking about going back to C++ development, or just focusing on T-SQL within the Microsoft stack. I'm thinking of switching because: a) I feel that the .NET platform is increasingly becoming commodified--meaning that it's more about learning a GUI and certain things to click around than really understanding programming. I'm concerned that this will lend itself to developers on that stack being paid increasingly less. b) It's very frustrating to spend your entire day essentially debugging something that should work but doesn't. Usually, Microsoft releases something that suggests anyone can just click here and there and poof, there's your application. Most of the time it doesn't work and winds up sucking much more time than it was supposed to save. c) I recently led a team in a small startup to build a WPF application. We were really hit hard by people complaining about having to download the runtime. Our code was also not portable to any other platform. Added to which, the RAM usage and the slowness to load of the app were remarkable for its size. I researched it and we could not find a way to optimize it. d) I'm a little concerned about being wedded to the Windows platform. What are the pros and cons of adding another platform, and which platform do people suggest? Thanks!

    Read the article
