Search Results

Search found 33194 results on 1328 pages for 'development approach'.

  • Which operating systems book should I go for?

    - by pecker
    Hi, I'm torn. For our course (a year ago) I used Stallings; I read it and it was fine, but I don't own an operating systems book of my own and I want to buy one. Which one should I pick?

    - Modern Operating Systems (3rd Edition) by Andrew S. Tanenbaum
    - Operating System Concepts by Abraham Silberschatz, Peter B. Galvin, and Greg Gagne
    - Operating Systems: Internals and Design Principles (6th Edition) by William Stallings

    I plan to get into development for real-world operating systems: Linux, Unix, and Windows driver development. I know that for each of these there are specific books available, but I feel one should have a basic book on the shelf. So, which one should I go for?

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post shows how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise, which combines R with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application could not do the job fast enough, and it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly; instead, approximation techniques must be used. In this blog post we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps - Calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you need balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. (A figure in the original post shows part of this spreadsheet: balances and cash flows for a sample account, where BMV = beginning market value, FLOW = cash flow in/out of the account, and EMV = ending market value.)

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well, and R offers multiple ways to work with spreadsheet data; for instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet held the data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet by sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.
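
    The getsheet() helper appeared only as a screenshot in the original post, which did not survive here. A minimal sketch of what such a helper might look like, assuming a Windows host with the Excel ODBC driver: the getsheet() name and the BMV/FLOW/EMV columns come from the post, while the file name, sheet names, and the ACCOUNT column are illustrative assumptions.

        library(RODBC)

        # Read one worksheet of an Excel workbook into a data.frame
        # (odbcConnectExcel2007 requires the Excel ODBC driver on Windows).
        getsheet <- function(xlsfile, sheet) {
          conn <- odbcConnectExcel2007(xlsfile)
          on.exit(odbcClose(conn))
          sqlQuery(conn, paste0("SELECT * FROM [", sheet, "$]"))
        }

        # Stack every sample account's sheet into one data.frame,
        # tagging each row with the account it came from.
        sheets <- c("Account1", "Account2", "Account3")   # hypothetical sheet names
        SimpleMWRRData <- do.call(rbind, lapply(sheets, function(s) {
          d <- getsheet("SampleAccounts.xls", s)          # hypothetical file name
          d$ACCOUNT <- s
          d
        }))
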
    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions, we could easily incorporate the customer's specific variations and business rules into the calculation.)

    The key thing in calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining our NPV function (shown as a screenshot in the original post), where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time.

    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum. The post then shows an example of using optimize (another screenshot), where low and high are scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return.
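
    The npv() definition and the optimize() call appeared only as screenshots in the original post. Here is a minimal sketch consistent with the variable names above; note this is a reconstruction rather than the post's actual code, and the particular money weighted return equation variant and the default low/high search range are assumptions.

        # One common formulation of the money weighted return equation:
        # grow the starting balance and each flow to the ending time at
        # rate r, subtract the actual ending balance; MWRR is the r that
        # makes this zero.
        npv <- function(r, bmv, cf, t, emv, tend) {
          bmv * (1 + r)^tend + sum(cf * (1 + r)^(tend - t)) - emv
        }

        # optimize() finds a minimum, so minimize |npv| over [low, high].
        SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 1) {
          optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                   lower = low, upper = high)$minimum
        }

        # Illustrative numbers only, not the post's sample account:
        SimpleMWRR(bmv = 100000, cf = c(5000, -2000), t = c(0.25, 0.5),
                   emv = 101000, tend = 1)
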
    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, you sometimes will not find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range and, if we did not find one, to call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we could write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, the year before last] in a single invocation. In the actual customer proof, we also defined time weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package holding our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package with a call shown as a screenshot in the original post. To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory; in those files, you need to at least provide a value for the "title" section. Once that is done, you can change directory to the IRR directory and build and install the package from the command line (the exact commands were another screenshot). The myIRR package for this blog post (with both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: The original post shows (as a screenshot) an example of testing the IRR function once it was converted to an installable package.

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is how to calculate IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data fits in memory, one easy approach is to use R's "by" function. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) The post's screenshot shows the use of "by" to calculate the money weighted rate of return for each account in our sample data set (a consolidated sketch appears at the end of this excerpt).

    Recap and Next Steps: At this point, you have seen the power of R being used to calculate IRR. Several things went well:

    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR; it was fast and let us avoid coding our own iterative approximation algorithm.
    - R was a convenient language for expressing the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could easily be added to our IRR functions.
    - The Split-Apply-Combine technique let us calculate IRR for many accounts at once.

    However, several challenges remain at this point in our story:

    - The actual data lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space, and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor allows.
    - The actual data needs to be kept secure, another reason not to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh process.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
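
    For reference, here is the consolidated end-to-end sketch promised above, covering the package build and the "by"-based calculation. The SimpleMWRR and SimpleMWRRData names come from the post; the package.skeleton() call, the ACCOUNT/TIME column names, and the exact arguments are assumptions standing in for the post's screenshots.

        # The package shell was built once with something like:
        #   package.skeleton(name = "myIRR", list = c("SimpleMWRR", "SimpleMWRRData"))
        # then, after editing the man/*.Rd files, from the shell:
        #   R CMD build myIRR
        #   R CMD INSTALL myIRR

        library(myIRR)          # the package built above
        data(SimpleMWRRData)    # sample accounts shipped with it

        # Split-Apply-Combine: one money weighted rate of return per account
        results <- by(SimpleMWRRData, SimpleMWRRData$ACCOUNT, function(acct) {
          SimpleMWRR(bmv  = acct$BMV[1],
                     cf   = acct$FLOW,
                     t    = acct$TIME,
                     emv  = acct$EMV[nrow(acct)],
                     tend = max(acct$TIME))
        })
        results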

  • How does the workflow between testers doing the testing and coders doing the coding work?

    - by dotnetdev
    In a large company that does software development, there are often dedicated teams for build management, testing, development, and so forth. Agile or not, how does the workflow among these teams work? Would the test team write unit tests and the dev team then write code to satisfy those tests (basically TDD)? And would the test team then write tests for a completely different project, or have a quiet period until the dev team has finished coding? What possible workflows are there? This is something that interests me greatly. I know that in my current company we are doing it incorrectly (we have 1 tester to about 5 devs, which is small scale), but I am not sure exactly how to draw out the ideal workflow. Many (OK, an ex-Project Manager) have tried, but all failed.

  • ASP.NET MVC project won't start under IIS 5.1 on Windows XP SP3

    - by mrjoltcola
    I have an ASP.NET MVC 2 project that runs fine under Windows 7 and starts on Windows XP if I use the Visual Studio Development Server. Starting it under IIS, however, generates the error "Unable to start debugging on the web server" with the message "The specified procedure could not be found". There are no errors in the system event viewer. If I start without debugging, I get an "HTTP 500 Internal Server Error". The reason I run it under IIS is that the project also includes some WCF wsHttp web services that use certificates, so the VS Development Server is not adequate for hosting them. I have already seen the links on SO that talk about adding the wildcard mapping; I've already done that, just as I did on Windows Server 2003, where I successfully hosted ASP.NET MVC RC2 for quite a while.

  • Is MonoTouch worth the cost or should I just learn Objective-C?

    - by jamesaharvey
    After sitting through a session today on Mono at a local .NET event, the use of MonoTouch was 'touched' upon as an alternative for iPhone development. Being very comfortable in C# and .NET, it seems like an appealing option, despite some of the quirkiness of the Mono stack. However, since MonoTouch costs $400, I'm somewhat torn on whether this is the way to go for iPhone development. Does anyone have experience developing with both MonoTouch and Objective-C? If so, is developing with MonoTouch that much simpler and quicker than learning Objective-C, and in turn worth the $400?

  • Scrum: What if the Product Owner has tasks?

    - by Lauren J
    I have just started working with a team that has picked up some aspects of Scrum (two-week timeboxes) but not others (the team does not currently agree to all estimates or to the number of points in a sprint, but I'll change this soon). The product owner is also a technical resource (a scientist) with some development background. Is it appropriate to have the product owner's tasks (which mostly involve research) mixed in with the team's tasks (some of which are research and some development)? I have looked at a lot of resources and not found an answer. Thanks!

  • Conditional XAML

    - by Nicholas
    For ease of development I'm using a Viewbox to wrap all content inside a Window. This is because my development machine has a smaller screen than the deployment machine, so using a Viewbox allows for better realisation of proportion. Obviously there is no reason for it to be there in Release builds of the code. Is there an easy method to conditionally include/exclude that 'wrapping' Viewbox in XAML? E.g.:

        <Window>
            <Viewbox>
                <UserControl /> <!-- content -->
            </Viewbox>
        </Window>

  • "Use of undefined constant CURLOPT_PROTOCOLS and CURLPROTO_HTTP" but it works?

    - by Dave
    Hi, on our dev environment we show all errors, warnings, and notices. I'm getting these:

        Notice: Use of undefined constant CURLOPT_PROTOCOLS - assumed 'CURLOPT_PROTOCOLS' in C:\notion\implementation\development\asterix\library\ExternalLibs\panda.php on line 69
        Notice: Use of undefined constant CURLPROTO_HTTP - assumed 'CURLPROTO_HTTP' in C:\notion\implementation\development\asterix\library\ExternalLibs\panda.php on line 69

    The code on line 69 is:

        curl_setopt($curl, CURLOPT_PROTOCOLS, CURLPROTO_HTTP);

    But the cURL code works: it goes off to the other server and retrieves what's necessary. What do these notices mean? Thanks very much.

  • mongoid with rails - "Database should be a Mongo::DB, not NilClass"

    - by Adam T
    Greetings, I am trying to get Mongoid to work with my Rails app and I am getting the error: "Mongoid::Errors::InvalidDatabase in 'Shipment bol should be unique': Database should be a Mongo::DB, not NilClass". I have created the mongoid.yml file in my config directory and have mongodb running as a daemon. The config file is like so:

        defaults: &defaults
          host: localhost

        development:
          <<: *defaults
          database: ship-it-development

        test:
          <<: *defaults
          database: ship-it-test

        production:
          <<: *defaults
          host: <%= ENV['MONGOID_HOST'] %>
          port: <%= ENV['MONGOID_PORT'] %>
          database: <%= ENV['MONGOID_DATABASE'] %>

    All of my specs fail with the above error. I am using Rails 2.3.8. Anyone have ideas?

  • ASP.NET MVC 2 TekPub Starter Site methodology question

    - by Pino
    OK, I've just run into this when I was only supposed to be checking my emails, but I've ended up watching this (and I'm not far off subscribing to TekPub): http://tekpub.com/production/starter Now this app is a great starting point, but it raises one issue for me and the development process I've been shown to follow (rightly or wrongly): there is no conversion from the LinqToSql object when passing data to the view. Are there any negatives to this? The main one I can see is with validation: does this cause issues when using MVC's built-in validation? That is something we use extensively. Because we are using the built-in objects generated by LinqToSql, how would one go about adding validation like:

        [Required(ErrorMessage = "Name is Required")]
        public string Name { get; set; }

    I'm interested to understand the benefits of this methodology, and any negatives that, should we take it on, we would experience through the development process.

  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify here (without too much detail; these tools have been covered in other questions) my understanding of the options:

    - NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language.
    - Inno Setup - Free tool for setup executables. Various binary compression schemes. Pascal scripting engine.
    - WiX - Free toolset to generate MSI binaries. XML definitions language.
    - WiX ClickThrough - Additional tools for packaging, web download, and auto-update detection (now part of the WiX core).
    - InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language.
    - Wise - Commercial development environment for installation packaging. Generates MSI binaries.
    - ClickOnce - Visual Studio-supported framework for publishing applications to a web server, with automatic detection of updates. No support for custom installation requirements (INI files, registry, etc.). Packages setup as an MSI binary.
    - InstallAware - Commercial development environment for installation. Generates MSI binaries. Automatic update framework (Web Update).

    If I have missed any, please let me know. I also found some useful discussions of these technologies on StackOverflow:

    - Best Simple Install System
    - Best choice for Windows installers
    - Alternatives to ClickOnce

    I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packaging installations and providing a framework for developers to access the run time environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates, but I haven't really found one. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but each has considerable limitations (please correct me if I misrepresent any of these). I would be interested in a framework that provided some of the following:

    - Third-party hosting/management of updates
    - Access to the client environment (INI files, registry, etc.)
    - User registration/activation
    - Feedback/error reporting

    This leaves me with the strong impression that the best way to approach the web deployment problem is through a custom-built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications. FileZilla, for example, makes an HTTP request to update.filezilla-project.org to check for updates, downloads an NSIS binary (I think), and then shuts down to run the install.

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump, if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine and returns null when it's done. It performs well and works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the map.getInt or map.getShort methods, i.e. a read operation on the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus it seems to reduce the frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    I then use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets through 60GB of data. Both machines are running Java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic; I'm not convinced that makes any difference. Both machines have 16GB RAM and are running with the same Java heap settings. I take the view that even if there is a bug in my code, the JVM ought to throw me a proper exception rather than fault like this! I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of bug 6244515, which is officially fixed. I already tried adding a "load" call to force the MappedByteBuffer to load its contents into RAM; this seemed to delay the error in the one test run I did, but did not prevent it. Or it could be coincidence that that was the longest run before a crash! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

  • Is it possible to call a user-space callback function from kernel space in Linux (ioctl)?

    - by Makis
    Is it possible to expand the ioctl interface in Linux so that a user-space application can send a pointer to a function to the kernel-space driver? I'm in particular thinking of ways to handle the stream in a user-controllable way, but to do the work in the kernel. Those operations could instead be compiled into the kernel module itself, but passing them in from user space would make development a lot easier, as I wouldn't need to mess with the kernel during development. More specifically, this would be the process:

    1. Data is read by the driver into a buffer.
    2. The data is handled in place by these user-defined functions.
    3. Some more handling is done, possibly with some HW blocks.
    4. The data is used by a user-space application.

  • Why isn't Passenger respecting my custom log format?

    - by Millisami
    I need to change the log format of my Rails app. I put this file in the lib directory and required it in the development.rb env file:

        require 'hodel_3000_compliant_logger'
        config.logger = Hodel3000CompliantLogger.new(config.log_path)

    With that, I should get output in development.log like this:

        Jun 28 03:05:13 millisami-notebook rails[18243]: Memory usage: 86888 | PID: 18243

    I get exactly this log format when I start my app with script/server (Mongrel). But when I run the app via Passenger, the format being logged is Rails' default. Why doesn't Passenger write to the log file like Mongrel does?

  • Does the 80/20 rule of time management apply to developers?

    - by Dean
    Jeff's recent article linked to a time management example of the First Fit Decreasing algorithm, which talked about the Pareto principle (or the 80/20 rule) of time management: that we produce 80% of our output in 20% of our time. Now we've all heard the programmer quote: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." But all jokes aside, it is often as if 20% of your code does what you want, and the other 80% handles exceptions... so does the 80/20 rule really apply to developers? Does anyone have any examples of why it does or does not apply to us?

  • Visual Studio localhost server can't locate file

    - by mhenk
    I have a very simple web project: just one .htm file with some JavaScript that opens a file, test.xml. The XML file is in the same folder as the .htm file (and it is part of the project), but when I start the page (F5 or Ctrl-F5) it can't find the XML. The page starts as http://localhost:50586/main.htm. When I do a folder listing, the test.xml file is right there. Opening the page directly in Firefox works fine, and the script can read and extract data from the XML. How can I convince the development server that the XML is indeed there? Or better still: how can I turn the development server off entirely? For my purposes, simply opening the page in my browser is all I need, and that should happen from within VS, of course.

  • Mercurial between server and local?

    - by artmania
    I have portal development work in progress, and I've had some trouble from time to time, like losing files or overwriting the wrong ones. So I decided to go with Mercurial for this development; it's my first experience with source control. I work on a server [Bluehost] for this project. Is there any way to keep updated backups locally? Do I have to set up Mercurial on Bluehost? Is there any way to sync changes on the server to my local Mac?

  • Modify installed SharePoint feature

    - by Laura L
    I have written a sequential workflow in SharePoint on our development environment. After testing, we decided to deploy this workflow as a feature on the staging environment. We did the following:

    1. copied the strongly named assembly to the GAC using gacutil
    2. copied feature.xml and workflow.xml to WebServerExtensions/12/templates/features/someFolder
    3. installed the feature (stsadm command)
    4. activated the feature (stsadm command)

    All worked exactly as planned and the workflow behaved correctly. The problem is that we then decided to change something in the code (a message was not very self-explanatory), so on the development machine we updated the message as requested and rebuilt the project. Now we cannot seem to find a way to correctly get rid of the previous version of this workflow/feature. To deploy the upgrade, we:

    1. deactivated and uninstalled the feature (stsadm commands) and also removed it from the GAC
    2. increased the version of the assembly
    3. performed steps 1 to 4 from above

    When using the workflow, we still get the first message; we cannot find a way to get the new message to be displayed. What are we missing?

  • ASP.NET MVC UserInteractive problem

    - by niao
    Greetings, in my MVC application, when I try to connect to a WCF service I get this error: "It is invalid to show a modal dialog or form when the application is not running in UserInteractive mode. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application." The problem only occurs on the production server (Win2008, IIS7); on my development server (WinXP, IIS6) it works OK. Additionally, I can connect from the ASP.NET MVC development server to the production WCF service, but I can't do this on the production server (ASP.NET MVC production to production WCF service).

  • Tutorials/Books on using Mono to develop RESTful webservices?

    - by max
    Hi, does anyone out there have pointers to good links/tutorials/books on developing web services with Mono? In more detail, I am interested in:

    - using Mono from project start on a Linux host
    - developing in C# using Visual Studio, ideally with remote debugging if that is realistic
    - developing web services in Mono accessible in a RESTful manner, returning JSON
    - hiding the service processes behind an Apache server
    - accessing the services either via JavaScript/AJAX or from a thin script layer written in PHP
    - scalability, which is important for me
    - unit testing of web services

    Any recommendations for material I could sift through to get a good head start? I might add that I'm C#/.NET savvy, but not in the context of web development. I've been using it since it came out, but mainly for internal server-client applications where the clients were Windows desktop apps and the communication layer was remoting or, sometimes, more low-level socket-based. Thanks, max

  • Advice on setting up SVN

    - by Vivek Chandraprakash
    I'm trying to set up an SVN server. I maintain a couple of websites based on ASP. There are currently three environments:

    - Development: any new modules/enhancements are done in this environment
    - Staging: mirror of production
    - Production: the public-facing website

    Currently, when there's an update to the website, this is what we do:

    1. make the update in development
    2. copy the file to staging
    3. copy the file to production

    In production, we take a backup of the old file by renaming it. I would like to make this simpler by installing SVN and stopping the file-renaming routine, but I'm not sure how many repositories to have per website: should it be three or two? I'm absolutely new to SVN; I just installed it on a Linux-based server (Ubuntu). Can you please advise how to go about it? Thanks -Vivek

  • Is there a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the .NET graphics world?

    - by Rao
    For the past 5 months or so, I've spent time learning C# using Andrew Troelsen's book and getting familiar with the .NET 4 stack... bits of ADO.NET, EF4, and a pinch of WCF to taste. I'm really interested in graphics development (not for games, though), which is why I chose the .NET route when I decided between Java and .NET... since I heard about WPF and saw some sexy screenshots and all. I'm almost done with the 4 WPF chapters in Troelsen's book. Now, all of a sudden, I saw a post on a forum about how "WPF was dead" in the face of something called Silverlight. I searched more and saw all the confusion going on at present... even stuff like "Silverlight is dead too!" with respect to HTML5. From what I gather, we are in a delicate period that will eventually decide which technology stabilizes, right? Even so, as someone new moving into UI and graphics development via .NET, I wish I could get some guidance from more experienced people. Maybe I'm reading too much? Maybe I have missed some pieces of information? Maybe a path exists that minimizes tears of blood? In any case, here is a sample vomiting of my thoughts, on which I'd appreciate some clarification, assurance, or spanking:

    - My present interest lies in desktop development, but on graduating from college I wish to market myself as a .NET developer, and the industry seems to be drooling for web stuff. Can Silverlight do both equally well? (I see from searches that SL works "out of browser".)
    - I have two fair-sized hobby projects planned that will have hawt UIs with lots of drag and drop, sliding animations, etc. These are intended to be desktop apps that will use reflection, database stuff using EF4, networking over LAN, and reading/writing of files... does this affect which graphics technology can be used?
    - If, at some laaaater point, I become interested in doing a bit of 3D stuff in .NET, will that affect which technologies can be used?
    - Or what if I look up to the heavens, stick out my middle finger, and do something crazy like go learn HTML5, even though my knowledge of it can be encapsulated in 2 sentences?

    Sorry I seem so confused; I just want to know if there's a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the graphics world.

  • Dev efforts for different mobile platforms

    - by Juriy
    Hello guys, I'm in the middle of developing a client-server "socializing" application that is supposed to run on several mobile devices. The project is pretty complex, involving networking, exchanging media, using geolocation services, and a nice user UI. In terms of development effort, technical risk, and extensibility, what is the best platform to start with, taking into account that the goal is to go live as fast as possible with the mobile version? The second goal is to cover the most users (but the first is more important).

    - iPhone (iPod, iPad)
    - Android
    - BlackBerry
    - Java ME, Symbian

    I realize that there are limitations on every platform, and there are different aspects to take into account (for example, iPhone has a better developer community than Android; J2ME runs in a terrible sandbox but covers the most devices). Please share your pros and cons. I have experience only with J2ME; unfortunately, I can't evaluate the other platforms.
