Search Results

Search found 10149 results on 406 pages for 'acceptance testing'.


  • MSDN Live 2010 – Delivered: 24 sessions (4 x 6) on Visual Studio and Team Foundation Server

    - by terje
    We (Mikael Nitell and I) got a whole track on the Norwegian MSDN Live tour this year. We did these as a pair, covering 4 cities over 4 days with 6 sessions per day, taking 8 hours to get through them. The Icelandic volcano made the travels a bit rough, but we managed 6 flights out of 8. The first leg had to go by van instead – a 7-8 hour drive each way together with other MSDN Live presenters – a memorable tour! Oslo was the absolute high point. We had to change to a bigger hall, people were crowding in, and even the big hall was packed! The presentations were mostly based on demos, but we had a few slides as well. They have been uploaded to my SkyDrive. Info to aliens – some of the text may be Norwegian. The sessions were as follows:
    - Overview of news in Visual Studio and Team Foundation Server 2010
    - Ensuring Quality with VS/TFS 2010
    - Releasing products with VS/TFS 2010
    - No More No Repro with VS/TFS 2010
    - Performance Testing and Parallel Programming with VS/TFS 2010
    - Migrating to VS/TFS 2010
    - Tips, tricks, news and some best practices with VS/TFS 2010
    In the coming days, I will post examples from the demos too, with explanations of how they are intended to work. These entries will also contain material we had to cut from the actual presentations due to time constraints. We managed to create recordings of two of the sessions, which will be uploaded to Channel 9 by Microsoft, as far as I know. I will update this blog with the exact locations when that is done. Also note we're (read: Osiris Data AS) running both Upgrade and Deep Dive courses on VS/TFS 2010 now in May. Please look here for more info. If you want to be informed, follow me on Twitter – all blog entries will be announced there.

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance a testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people will want to do something like var server = new Server(){ ... } and not have to worry about creating the many dependencies, and the graph of dependencies, that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further). Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than for creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So:
    - I could hardcode the dependencies. I don't want to do that – we lose our testing!
    - I could provide a default constructor with hard-coded dependencies and one which takes the dependencies as parameters. That's... messy, and likely to be confusing, but viable.
    - I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance.
    Having two constructors which are connected implicitly rather than explicitly would be bad design in general in my book, but at the moment that's about the least evil I can think of. Opinions? Wisdom?
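    A minimal Java sketch of the second option above – a public default constructor that wires up the production collaborators, plus a package-private constructor for tests. All type names here are hypothetical and chosen only for illustration; the original question assumes C#, where "internal" plus a friend assembly plays the role of package-private access:

        interface ConnectionPool { }
        interface RequestLogger { }
        class DefaultConnectionPool implements ConnectionPool { }
        class FileRequestLogger implements RequestLogger { }

        public final class Server {
            private final ConnectionPool pool;
            private final RequestLogger logger;

            // Public API: the 99.999% case – callers never see the dependency graph.
            public Server() {
                this(new DefaultConnectionPool(), new FileRequestLogger());
            }

            // Package-private seam for tests: collaborators can be swapped for fakes or mocks.
            Server(ConnectionPool pool, RequestLogger logger) {
                this.pool = pool;
                this.logger = logger;
            }
        }

    The trade-off called out in the question still applies: the two constructors are implicitly coupled, so any change to the default wiring has to be mirrored wherever the test constructor is used.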

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts:
    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, Scrum, daily deployments.
    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
    - developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on Vagrant handles the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance.
    Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about:
    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill when using Tomcat? I mean, isn't there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • Does Agile force developers to work more?

    - by Shooshpanchick
    Looking at common Agile practices, it seems to me that they (intentionally or unintentionally?) force developers to spend more time actually working, as opposed to reading blogs/articles, chatting, taking coffee breaks and just plain procrastinating. In particular:
    1) Pair programming – the biggest work-forcer, simply because it is inconvenient to do all that procrastinating when there are two of you sitting together.
    2) Short stories – when you have a HUGE chunk of work that must be done in, e.g., a month, it is pretty common to slack off for the first three weeks and switch to OMG DEADLINE mode for the last one. With little chunks (that must be done in a day or less) it is the exact opposite – you feel that time is tight, there is no space for maneuvering, and you will be held accountable for the task pretty soon, so you start working immediately.
    3) Team communication and cohesion – when you underperform in a slow, distanced and silent environment it may feel OK, but when at the end of the day at the Scrum meeting everyone boasts about what they have accomplished and you have nothing to say, you may actually feel ashamed.
    4) Testing and feedback – again, it prevents you from keeping tasks "99% ready" (when it's actually more like 20%) until the deadline suddenly happens.
    Do you feel that under Agile you work more than under "conventional" methodologies? Is this pressure compensated for by the more comfortable environment and by the feeling of actually getting the right things done quickly?

    Read the article

  • Java SE 8 (with JavaFX) Developer Preview Release for ARM

    - by Roger Brinkley
    In an effort to get ARM developers testing Java SE 8 before the scheduled release later this year, a Java SE 8 Developer Preview Release for ARM has been made available. This release has been tested on the Raspberry Pi but should work on other ARM platforms. In addition to the new Java SE features, this release provides specific support of hard float GPU on the Raspberry Pi, something that has been anticipated by a number of developers. Additionally, this release includes support for an optimized JavaFX. Specific configurations of JDK 8 on ARM are listed below:
    - JavaFX is supported on: ARM architecture v6/7 (hard float)
    - Supported platforms without JavaFX: ARM architecture v6/7 (hard float), ARM architecture v7 (VFP, little endian), ARM architecture v5 (soft float, little endian), Linux x86
    The download page includes setup instructions for a Raspberry Pi device as well as demos and samples. Developers are also encouraged to try their own applications and to share their stories via the JavaFX or Project Feedback Forums. If you've got a Raspberry Pi or another ARM device, it's time to get started with the Java SE 8 Developer Preview release.
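    If you want to give the preview a quick try on a device, a minimal JavaFX program is enough to confirm that the runtime and rendering pipeline work. This sketch is not part of the announcement above – it is just a hypothetical smoke test using standard JavaFX 8 APIs:

        import javafx.application.Application;
        import javafx.scene.Scene;
        import javafx.scene.control.Label;
        import javafx.scene.layout.StackPane;
        import javafx.stage.Stage;

        // Minimal JavaFX application: shows a single label in a 320x240 window.
        public class HelloArm extends Application {
            @Override
            public void start(Stage stage) {
                StackPane root = new StackPane(new Label("JavaFX on ARM"));
                stage.setScene(new Scene(root, 320, 240));
                stage.show();
            }

            public static void main(String[] args) {
                launch(args); // starts the JavaFX runtime and calls start()
            }
        }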

    Read the article

  • SOA performance on SPARC T5 benchmark results

    - by JuergenKress
    The brand NEW super fast SPARC T5 servers are available. The platform is superb for running large SOA Suite environments or for consolidating your whole middleware platform. Some performance advice, recommended for all workloads:
    - Performance profile for SOA apps on Oracle Solaris 11
    - BPEL (Fusion Order Demo) instances per second
    - OSB (messages / transformations per second)
    - Crypto acceleration study for SOA transformations
    - SPARC T4 and T5 platform testing, pre-tuning
    - Performance suitable for mid-to-high range enterprises in a stand-alone SOA deployment or a virtualized consolidation environment shared with Oracle applications
    - 2.2x to 5x faster than SPARC T3 servers
    - 25% faster SOA throughput, core to core, than Intel 5600-series servers (running Exalogic software)
    - SPARC T5 has 2x the consolidation density of Intel 5600-class processors
    - 2x faster initial deployment time using Optimized Solutions pre-tested configuration steps
    - Over 200 application adapters for the easiest Oracle software integration
    Would you like to get details? We can share T5 SOA Suite performance benchmarks with you on a 1:1 basis – please contact your local partner manager or myself! For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: T5,TS Sparc,T5 SOA,bechmark,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Payroll Customers Must Apply Mandatory Patches to Maintain Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support. Without these patches, customers risk not being able to apply any fixes for issues they encounter, as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development. Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP generally is a pre-requisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind:
    - Compliance: Manage the people in your organization within the requirements of a specific country.
    - Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you.
    - Reliability: Reliable code with multiple customer downloads and comprehensive testing.
    For the listing of Mandatory Rollup Patches for Oracle Payroll please view Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll. For the listing of Mandatory Patches for the HRMS Suite please view Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List. For information on the latest Rollup Patches (RUPs) for the HRMS Suite please view Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

    Read the article

  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the prepatched source from Launchpad using bzr, compiled it with the usual make-kpkg using the current config file plus default options for the rest, installed the kernel and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So, I tried recompiling these and succeeded with both of them (no errors were reported) – the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch. Yet, when I try to modprobe either module (both having in common the fact that they were built after the kernel, fglrx as a deb, and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware," can't remember), and no fglrx or wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing of these? I know that there is a driver signing option that was merged recently, but as far as I could tell the kernel config file generated by the build process had that disabled. OTOH, while building the wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.

    Read the article

  • How can I triple boot Xubuntu, Ubuntu and Windows?

    - by ag.restringere
    Triple Booting Xubuntu, Ubuntu and Windows. I'm an avid Xubuntu (Ubuntu + XFCE) user but I also dual boot with Windows XP. I originally created 3 partitions and wanted to use the empty one as a storage volume, but now I want to install Ubuntu 12.04 LTS (the one with Unity) to do advanced testing and packaging. Ideally I would love to keep these two totally separate, as I had problems in the past with conflicts between Unity and XFCE. This way I could wipe the Ubuntu w/ Unity installation if there are problems and really mess around with it. My disk looks like this:
    /dev/sda1 -- Windows XP
    /dev/sda2 --
    Disk /dev/sda: 200.0 GB, 200049647616 bytes
    255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1   *       63   78139454   39069696   7  HPFS/NTFS/exFAT
    /dev/sda2     78141440  156280831   39069696  83  Linux
    /dev/sda3    156282878  386533375  115125249   5  Extended
    /dev/sda4    386533376  390721535    2094080  82  Linux swap / Solaris
    /dev/sda5    156282880  386533375  115125248  83  Linux
    I want to keep each OS in its own partition, totally separate, and be able to select any of the three systems from the GRUB boot menu:
    sda1 --- [Windows XP]
    sda2 --- [Ubuntu 12.04] "Unity"
    sda3(4,5) -- [Xubuntu 12.02] "Primary XFCE"
    What is the safest and easiest way to do this without messing my system up and requiring invasive activity?

    Read the article

  • Questioning one of the arguments for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source): To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult. But I don't buy this argument: even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:
    class BillingService {
        private final CreditCardProcessor processor;
        private final TransactionLog transactionLog;

        // constructor for tests, taking all collaborators as parameters
        BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
            this.processor = processor;
            this.transactionLog = transactionLog;
        }

        // constructor for production, calling the (productive) constructors of the collaborators
        public BillingService() {
            this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
        }

        public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
    }
    So there may be other arguments for dependency injection (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?

    Read the article

  • OPN Exchange @ OpenWorld - Don't Overlook TestFest!

    - by Get_Specialized!
    As part of the Oracle PartnerNetwork Exchange @ OpenWorld conference, a "Test Fest" will be taking place from Monday, October 1st through Thursday, October 4th, 2012 at the Marriott Marquis Hotel. Were you asked by management to get the most out of your OOW experience and expense? Looking for something that will help you or a team member get approved by your management to attend this year? Seeking a way to justify having your technical expert out at OOW to help you close a deal? Then take advantage of the training available onsite during Oracle OpenWorld, whether you are primarily coming to staff a booth, present a session, meet with customers, or attend the OPN PartnerNetwork Exchange. With limited seating available, it's advisable to pre-register today to:
    - Get recognized for your skills, with an OPN Specialist accreditation
    - Take exams that are free of charge for Oracle PartnerNetwork Exchange attendees
    - Help your company get Specialized in a higher level of the OPN Program
    Get a list of exams and study materials and pre-register using the Schedule Builder tool to reserve a seat in one of the 10 sessions, offered on a first come, first served basis. Remember, upon arrival at the testing room you will need to show proof of valid OPN Membership and have your valid Pearson Vue account ID. For any questions, email the OPN Communications team.
    Test Fest Schedule:
    Date                  | Session 1           | Session 2           | Session 3
    Monday - October 1    | 10:30 AM - 12:30 PM | 1:00 PM - 3:00 PM   | 3:30 PM - 5:30 PM
    Tuesday - October 2   | 10:00 AM - 12:00 PM | 12:30 PM - 2:30 PM  |
    Wednesday - October 3 | 10:30 AM - 12:30 PM | 1:00 PM - 3:00 PM   | 3:30 PM - 5:30 PM
    Thursday - October 4  | 11:00 AM - 1:00 PM  | 1:30 PM - 3:30 PM   |
    Look forward to meeting you at the Oracle PartnerNetwork Exchange!

    Read the article

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF Service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the Target Framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the ServiceReference, and created a new reference to the service using the old "Add Web Reference". A matter of 2 minutes. When testing, the int value in the data contract arriving at the WCF Service was suddenly 0, instead of the 38 we expected. What happened? When generating an old-style Web Reference on a WCF data contract, an additional boolean field called [Fieldname]Specified (e.g. AgeSpecified) is created for each value-type field, and it defaults to "false". WCF inspects these boolean fields to determine whether a value was provided for the value-type field. If the "Specified" field is "false", WCF translates that to using the default value of the value-type field – for int this is 0. So we had to set the "Specified" field for the int value to "true" before sending, and everything was fine again. That was what we forgot after setting the Framework version to 2.0...

    Read the article

  • Sangam 13: Hyderabad, India

    - by mvaughan
    by Teena Singh, Oracle Applications User Experience. The AIOUG (All India Oracle User Group) will be hosting Sangam 13 on November 8th and 9th in Hyderabad, India. The first Sangam conference was in 2009, and the AppsUX team has been involved with the conference and user group membership since 2011. We are excited to be returning to the conference and meeting Oracle end users there. For the first time at Sangam, the AppsUX team will host an Onsite Usability Lab at the conference. If you or one of your team members is attending the conference and interested in a pre-scheduled one-on-one usability session, contact [email protected]. In addition to pre-scheduled sessions in the Onsite Usability Lab, our team will also be hosting Walk-In studies. Whether you have 5 minutes, 15 minutes, or half an hour, you can experience a one-on-one demo and learn more about how user testing is conducted with a UX expert. Additionally, you can learn how you and your company can participate in future design and user research activities. The AppsUX team will also be available at the Oracle booth in the Demo area if you want to ask questions. Finally, you can learn how simplicity, consistency, and emerging trends are driving the applications user experience strategy at Oracle when you attend Thomas Wolfmaier's (Director of SCM User Experience, Oracle) presentation on Applications User Experiences In the Cloud: Trends and Strategy, November 8th, 2013. For further information on our team's involvement in the conference, please refer to the events page on Usable Apps here.

    Read the article

  • Is Scala ready for prime time?

    - by jayraynet
    Now that I've done a few trivial things with Scala (which I love for "hello world" and contrived applications!), I am left wondering – partly about the maturity of the tools supporting development, and partly about general applicability. Are the toolsets ready? Is Scala appropriate for use on enterprise / business applications? Would "you" use it on a non-trivial project? Some of my (possibly unfounded) concerns would be:
    - are the IDE and toolsets as rich as what we have for developing .NET and Java applications (Eclipse for Scala seems limited compared to Eclipse for Java)?
    - are the build / CI / testing toolsets able to deal effectively with Scala?
    - how maintainable is the concise code that can be (is encouraged to be?) written in the language?
    - is it possible to find developers with Scala experience?
    - is there enough critical mass to get help through online references and books that go beyond an "intro" to the language?
    So, bottom line – is the ecosystem mature enough to use now, or are we better off waiting to see how it evolves? EDIT: let's say "non-trivial" means a multi-year, multi-release project with 10-20 developers.

    Read the article

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called 'Save Configuration to a File' which would allow you to easily export a website's configuration, to be imported later either on the same server or on another box. This came in handy anytime you wanted to duplicate a site in order to do some testing without impacting the existing application. So naturally, Microsoft decided to do away with this feature in IIS 7. The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions. Here are the steps:
    1. Open a command prompt, navigate to C:\Windows\System32\inetsrv and run the following command:
       appcmd list site /name:<sitename> /config /xml > C:\output.xml
       So if you wanted to export a website named EAC, you would run the same command with /name:EAC.
    2. If you'll be setting up another copy of the site on the same server, you'll now need to edit the output.xml file before importing it. This is necessary in order to avoid conflicts such as bindings, Site ID, etc. To do this, edit the XML and change those values. Go ahead and make a copy of the home directory, and rename it to whatever folder name you specified in the output – /EAC2 in this example. If you decide to change the app pool, make sure you go ahead and create the new app pool as well.
    3. Once these edits have been made, we are ready to import the site. To do that run:
       appcmd add site /in < C:\output.xml
    That's it. You should now see your site listed when opening up Inet Manager. If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed. Look at the System log to identify issues like this.

    Read the article

  • Dependency Injection and method signatures

    - by sunwukung
    I've been using YADIF (yet another dependency injection framework) in a PHP/Zend app I'm working on to handle dependencies. This has achieved some notable benefits in terms of testing and decoupling classes. However, one thing that strikes me is that despite the sleight of hand performed when using this technique, the method names impart a degree of coupling. Probably not the best example – but these methods are distinct from, say, the PEAR Mailer. The method names themselves are a (subtle) form of coupling:
    // example
    public function __construct($dic) {
        $this->dic = $dic;
    }

    public function example() {
        // this line in itself indicates the YADIF origin of the DIC
        $Mail = $this->dic->getComponent('mail');
        $Mail->setBodyText($body);
        $Mail->setFrom($from);
        $Mail->setSubject($subject);
    }
    I could write a series of proxies/wrappers to hide these methods and thus promote decoupling from YADIF, but this seems a bit excessive. You have to balance purity with pragmatism... How far would you go to hide the dependencies in your classes?
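    One common way to make that coupling disappear from the class itself is to inject the collaborator rather than the container, so the class depends only on a small role interface. A minimal sketch of the idea, written in Java rather than PHP purely for brevity – every name below is hypothetical and not part of YADIF or the original code:

        // A small role interface: the only thing the consuming class knows about mail.
        interface Mailer {
            void send(String to, String subject, String body);
        }

        // Depends on the Mailer role, not on the container or a concrete mail component,
        // so the getComponent('mail') style of coupling never appears here.
        class WelcomeNotifier {
            private final Mailer mailer;

            WelcomeNotifier(Mailer mailer) {
                this.mailer = mailer;
            }

            void notifyNewUser(String address) {
                mailer.send(address, "Welcome!", "Thanks for signing up.");
            }
        }

    The container (YADIF or anything else) then only shows up in one place – the composition root that builds WelcomeNotifier – instead of leaking its method names into every class.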

    Read the article

  • 3D space game development

    - by user1693061
    I want to develop a 3D game (sci-fi type with spaceships) which can be played in multiplayer mode – and by multiplayer I mean around 10 players for a start, as it will be a personal testing project and mostly educational. I have been searching for some days for available languages and engines, but I am kind of confused. Since I have been learning Java during my first year at an IT university and have a pretty good understanding of it, I thought I would go with the Java language and develop the game as an applet so it could be played in a browser. After going through an applet game tutorial I understood how graphics work in an applet. So, 1st question: could an applet carry the burden of a 3D game, especially in multiplayer? My thinking: it's a space game, so graphics should not be such a big problem since it won't be that crowded with entities apart from ships, planets and some effects. If a Java applet is not the way for my project, I wouldn't mind developing it for the desktop (I mean not making it a browser game). 2nd question: should I use the Unity engine for my purpose (a space game)? If not, name another language/engine combo.

    Read the article

  • A few tips on deploying Secure Enterprise Search with PeopleSoft

    - by Matthew Haavisto
    Oracle's Secure Enterprise Search is part of PeopleSoft now. It is provided as part of the PeopleTools platform as an appliance, and is used with applications starting with release 9.2. Secure Enterprise Search is a rich and powerful search product that can enhance search and navigation in PeopleSoft applications. It also provides useful features like facets and filtering that are common in consumer search engines. Several questions have arisen about the deployment of SES, how to administer it and how to ensure optimum performance. People have also asked about which versions are supported on various platforms. To address the most common of these questions, we are posting this list of tips.
    Platform Support: SES 11.1.2.2 does not support some of the platforms supported by PeopleTools, such as Windows 2012 and AIX 7.1. However, PeopleSoft and SES can use different operating system platforms when SES is deployed on a separate machine. SES 11.2.2.2 will have the required platform support for PT 8.53 in the future. We are planning to certify PT 8.53 once the testing is complete in 8.54 development and all platform support is released for 11.2.2.2.
    Architecture: We recommend running SES on a separate machine (from your apps) for two reasons:
    1. SES bundles specific WebLogic, Java, and Oracle DB versions and might need different OS patches, at a minimum, than PeopleSoft. By having SES run on a different machine, these pre-requisites can be managed better through their lifecycle, independently for PeopleSoft and SES.
    2. SES is resource intensive – it runs its own WebLogic and Oracle database. By having SES run on its own machine, sufficient resources can be allocated to SES, and the PeopleSoft servers are freed from the impact of SES load patterns.
    We will be providing a comprehensive red paper covering PeopleSoft/SES administration in the near future, but until that is published, we'll post tips on this blog.

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple:
    - MVC
    - Service Layer
    - EF
    - DB
    to really complex:
    - MVC
    - UoW
    - DI / IoC
    - Repository
    - Service
    - UI Tests
    - Unit Tests
    - Integration Tests
    But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs / consultants can hop on, make changes, and contribute immediately, without having to wade through 6 layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and paying for it down the line. In all cases, there was never a need to actually make code swappable or reusable – and the tests were never actually maintained past the first iteration, because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if, in the end:
    - testing and interfaces aren't used,
    - rapid development (read: cost savings) is a priority,
    - the project's requirements will be changing a lot while in development,
    ...would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, the number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs / consultants that have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • I seem to be missing a few important concepts with PhoneGap

    - by garethdn
    I'm planning on developing an app for multiple platforms and I'm thinking that PhoneGap might be perfect for me. I had been reading that it's one codebase for all platforms, but looking at the PhoneGap guide it seems there are separate instructions for each platform. So if I want to develop for iOS, Android, BB and WP7, do I need to write 4 different sets of code? I'm sure I'm missing something fundamental here. Aside from that, how do people usually approach a PhoneGap build? You obviously / probably want the finished app to look like a native app – is it more common than not to use jQuery Mobile together with PhoneGap? Is there a preferred IDE? I see that in the guide, for iOS, they seem to suggest Xcode. I'm fine using Xcode, but it seems a bit of an overkill for HTML & CSS. Do I need to develop in Xcode, and if not, how do I approach it? Use a different IDE / text editor and then copy-paste into Xcode for building and testing? I know this question is long-winded and fundamental, but it's something which I don't think is properly addressed in the guides. Thanks.

    Read the article

  • How to promote an open-source project?

    - by Shehi
    First of all, I apologize if this is the wrong section of the network to post this question in. If it is, please feel free to move it to a more appropriate location... Question: I would like to hear your ideas regarding the ways open-source projects are started and run. I have an open-source content management system project, and here some questions arise: How should I act? Shall I come up with a viable pre-alpha edition with working front- and back-ends first and then announce the project publicly? Or shall I announce it right away, from scratch? As a developer I know that one should use a versioning system like Git or SVN, which I do – no problems there. And the merit of unit testing is also something to remember, which, to be frank, I am not into at all... Project management – I am a beginner in that, at best. Coding techniques and practices such as Agile development are something I want to explore... In short, any ideas for a developer who is new to the open-source world are most welcome.

    Read the article

  • Am I programming too slow?

    - by Jonn
    I've only been in the industry for a year and I've had some problems making estimates for specific tasks. Before you close this: yes, I've already read http://programmers.stackexchange.com/questions/648/how-to-respond-when-you-are-asked-for-an-estimate, and that's about the same problem I'm having. But I'm looking for a more specific gauge of experiences – something quantifiable, or perhaps other programmers' average performance, which I should aim for and base my estimates on. The answers there deal in weeks, and I was looking more for an answer on the level of a task assigned for a day or so. (Note that this doesn't include submitting for QA or documentation, just the actual development time, from writing tests if I used TDD, to making the page, before having it submitted to testing.) My current rate is as follows (on ASP.NET WebForms):
    - I'm able to develop a simple data entry page with a grid listing (no complex logic, just Creating and Reading) on an already-built architecture, given one full day's (8 hours) time.
    - Adding complex functionality, and Update and Delete pages, adds another full day to the task.
    - If I have to start the page from scratch (no solution, no existing website) it takes me another full day.
    - (Not always, but) if I encounter something new or something I haven't done yet, it takes me another full day.
    Whenever I make an estimate that's longer than expected, I feel that others think I'm lagging a long way behind everyone else. I'm just concerned, as there have been expectations that when it's just one page it should take me no more than a full day. Yes, there definitely is more room for improvement. There always is. I have a lot to learn. But I would like to know if my current rate is way too slow, just average, or average for someone no longer than a year in the industry.

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:
    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)
    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • Software Manager who makes developers do Project Management

    - by hdman
    I'm a software developer working at an embedded systems company. We have a Project Manager, who takes care of the overall project schedule (including electrical, quality, software and manufacturing), hence his software schedule is very brief. We also have a Software Manager, who's my boss. He makes me write and maintain the software schedule, design documents (high- and low-level design), the SRS, change management, verification plans and reports, release management, reviews, and of course the software. We only have one Test Engineer for the whole software team (10 members), and at any given time there are a couple of projects going on. I'm spending 80% of my time producing these documents. My boss comes from a process background and believes that what we need is better documentation to improve the software:
    (1) He considers the design to be paramount; coding is "just writing the design down", it shouldn't take too long, and "all the code should be written before the hardware is ready".
    (2) He doesn't understand the difference between central and distributed version control, even after we told him it's easier to collaborate with a distributed model.
    (3) He doesn't understand code, yet wants to understand every bug and its proposed solution.
    (4) He believes verification should be done by the developer, and validation by the Tester. The thing is, though, our verification only checks whether the implementation is correct (we don't write unit tests – they are never considered in the schedule), and validation is black-box testing, so the unit tests are missing.
    I'm really confused.
    (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the Software Project Management, in essence.
    (2) I don't really like creating documents; I want to solve problems and write code. In my experience, creating design documents only helps to an extent – it's never the solution to better or faster code.
    (3) I feel the boss doesn't really care about making better products, only about looking like a good manager in the eyes of upper management.
    What can I do?

    Read the article

  • How do you track existing requirements over time?

    - by CaptainAwesomePants
    I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to add a new change, we notice that code does all sorts of things in there, and we're not entirely sure which things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token was supposed to only be valid for 30 minutes, or did some programmers just pick a sensible default? Can we change it? Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we created a few additional documents describing new features, but nobody ever goes back and updates those documents when new features are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were and why?

    Read the article
