Search Results

Search found 33291 results on 1332 pages for 'development environment'.

Page 640/1332 | < Previous Page | 636 637 638 639 640 641 642 643 644 645 646 647  | Next Page >

  • Fixed Assets Recommended Patch Collections

    - by Cindy A B-Oracle
    After the introduction of the Recommended Patch Collections (RPCs) in late 2012, Fixed Assets development has released an RPC about every six months. You may recall that an RPC is a collection of recommended patches consolidated into a single, downloadable patch, ready to be applied. The RPCs are created with the following goals in mind:
      - Stability: Address issues that occur often and interfere with the normal completion of crucial business processes, such as period close, as observed by Oracle Development and Global Customer Support.
      - Root Cause Fixes: Deliver root cause fixes for data corruption issues that delay period close, normal transaction flow actions, performance, and other issues.
      - Compact: While bundling a large number of important corrections, the file footprint is kept as small as possible to facilitate uptake and minimize testing.
      - Reliable: Code proven by multiple customer downloads and comprehensive testing by QA, Support and Proactive Support.
    There has been a revision to the RPC release process for spring 2014. Instead of releasing product-specific RPCs, development has released a 12.1.3 RPC that is EBS-wide. This EBS RPC includes all product-recommended patches along with their dependencies. To find out more about this EBS-wide RPC, please review Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 1 (RPC1) (Doc ID 1638535.1).

    Read the article

  • Virtual Developer Day: MySQL - July 31st

    - by Cassandra Clark - OTN
    Virtual Developer Day: MySQL is a one-stop shop for you to learn all the essential MySQL skills. With a combination of presentations and hands-on lab experience, you'll have the opportunity to practice in your own environment and gain more in-depth knowledge to successfully design, develop, and manage your MySQL databases. This FREE virtual event has two tracks, tailored for both fresh and experienced MySQL users. Attend the sessions on July 31st and sharpen your skills to:
      - Develop your new applications cost-effectively using MySQL
      - Improve performance of your existing MySQL databases
      - Manage your MySQL environment more efficiently
    When? Wednesday, July 31, 2013
      - Mumbai: 10:30 a.m. - 2:30 p.m. (GMT +5:30)
      - Singapore: 1:00 p.m. - 5:00 p.m. (GMT +8:00)
      - Sydney: 3:00 p.m. - 7:00 p.m. (GMT +10:00)
    Register TODAY!

    Read the article

  • Groovy/Grails course content

    - by Don
    Hi, some Java developers have asked if I could give them a 2-day primer on Grails development. I'm assuming they're familiar with:
      - the Java language and libraries
      - Java web development, e.g. Servlets and JSPs
      - Spring
      - Hibernate
      - client-side development: CSS, HTML, JavaScript
    I'm further assuming they have no experience with Groovy or Grails. AFAIK, the app they'll be building is a new project, so there's no need to cover topics like using GORM with a legacy database. I'm trying to decide how I should structure the course, e.g. what topics to cover and how much time to spend on each. I reckon about 1/2 to 3/4 of a day on Groovy and the rest of the time on Grails would be adequate. I'll probably use the Groovy console to demonstrate the Groovy language concepts, and a simple Grails app to explain the conventions and structure of a Grails project. If anyone has a list of Groovy/Grails topics that I should cover, or even an outline of a similar course that they've given or taken, I'd be very grateful. Naturally, I will give credit for any resources I use during the course.

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx
    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and are sometimes just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.
      - As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
      - As a DBA I am using consistent names for my fields which match the naming standards of my organization
      - As a DBA my model supports simple CRUD operations against all the entities
      - As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
      - As an Application Architect the database model has been validated against the UI
      - As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
      - As an Application Architect we have a deployment diagram that describes how the application components will be deployed
      - As an Application Architect we have a navigation diagram that describes the typical application flow
      - As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
      - As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
      - As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
      - As a Project Manager all team members understand the goals of each release and iteration as they are planned
      - As a Project Manager all team members understand their role and the roles of others
      - As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing
      - As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
      - As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces to work with the application
      - As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
      - As a Test/QA Analyst we have access to a test environment that is isolated from production and staging environments
      - As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
      - As a Test/QA Analyst I am able to automate portions of the test process
    Thoughts? -mike

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group
    This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, given the current global economic and competitive landscape, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to accomplish this: collaboration, and the concept of "insight to action". With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements, ensuring mutual success for both Athene and our clients. When we looked at "insight to action" we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I have tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp. According to ISO 12207, testing is done in validation:
      - Prepare test requirements, cases and specifications
      - Conduct the tests
    Under verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery." and "The software components and units of each software item have been completely and correctly integrated into the software item." I am not sure how to verify that without testing, but testing is not listed there as a technique. From IEEE: Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]. At the end of the development process? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification and which validation? I do not understand why some say dynamic verification is testing, while others say only validation is. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When I check the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.
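
    To make the asker's example concrete, here is a minimal sketch of the two kinds of test, using JUnit and an invented in-memory form (both are assumptions; the question names no framework or code):

      import static org.junit.Assert.*;
      import org.junit.Test;

      public class SaveFormTest {

          // Tiny stand-in for the form under test (hypothetical, for illustration only).
          static class SaveForm {
              static final int MAX_LEN = 64;
              String first = "", last = "";
              boolean saved;
              void typeFirst(String s) { first = truncate(s); }
              void typeLast(String s)  { last = truncate(s); }
              void save()              { saved = !first.isEmpty() && !last.isEmpty(); }
              private String truncate(String s) { return s.length() > MAX_LEN ? s.substring(0, MAX_LEN) : s; }
          }

          // Verification: check the product against the stated system requirement (64-character fields).
          @Test
          public void fieldsEnforceThe64CharacterLimit() {
              SaveForm form = new SaveForm();
              form.typeFirst(new String(new char[100]).replace('\0', 'x'));
              assertEquals(64, form.first.length());
          }

          // Validation: walk the use case (user fills in first and last name and saves).
          @Test
          public void fillingBothNamesAndSavingSucceeds() {
              SaveForm form = new SaveForm();
              form.typeFirst("Jane");
              form.typeLast("Doe");
              form.save();
              assertTrue(form.saved);
          }
      }

    On this reading, the first test verifies a condition imposed by the specification, while the second validates the behavior the user actually needs; the same test run exercises both.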

    Read the article

  • How to recover a www directory and included files in Ubuntu 9.04

    - by Al Mubarak
    Hi, I'm using Ubuntu 9.04 for Drupal development. This morning I accidentally removed my www folder, which contained a great many of my web development documents. I restarted my system right after it happened, and I have installed some recovery software like gpart. Is there any possibility of recovering my www directory and its files? It holds most of my web development work, so I'm very worried about this. Please let me know as soon as possible. Thanks very much in advance.
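
    For reference, a minimal recovery sketch, assuming the folder lived at /var/www on an ext3 partition at the hypothetical device /dev/sda1 (stop writing to that partition first, since new writes overwrite the deleted blocks; working from a live CD/USB is safest):

      sudo apt-get install extundelete        # alternative: testdisk, which provides photorec
      sudo umount /dev/sda1                   # the partition that held /var/www
      cd /mnt/usb                             # recover onto a *different* filesystem
      sudo extundelete /dev/sda1 --restore-directory var/www   # writes results under ./RECOVERED_FILES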

    Read the article

  • Enterprise Manager Extensibility Exchange – Version 1.1 Now Available!

    - by Joe Diemer
    Since its announcement at Oracle OpenWorld 2012, the Enterprise Manager Extensibility Exchange is becoming the source for accessing Enterprise Manager entities, including plug-ins, connectors, deployment procedures, assemblies, templates, and more. Based on feedback, the Exchange has recently been updated so Enterprise Manager administrators can find and access Oracle and partner-built plug-ins and connectors more easily. The Exchange enables anyone to contribute an Enterprise Manager entity through the "Contribute" tab, where information about the entity is captured and placed on the Exchange once it is approved. The Exchange encourages comment through the Enterprise Manager Forum. An Oracle partner can build a plug-in by accessing the Extensibility Development Kit (EDK), found on the Development Resources tab. Oracle partners and customers can also engage a partner that has built its practice specializing in plug-in development and deployment. One of those partners is Blue Medora, which has effectively used the EDK to build plug-ins to manage non-Oracle targets. Next week Blue Medora will be a "Guest Blogger" and tell a great story about heterogeneous datacenter management. Partners can also have their plug-ins validated through the Oracle Validated Integration (OVI) program. NetApp is an example of a partner that recently built an Enterprise Manager plug-in and validated it through the program. Check back here in two weeks for their blog post describing the value of an Enterprise Manager "OVI" plug-in, as well as the specifics of the NetApp storage plug-in. Check out the NetApp Enterprise Manager Validated Integration datasheet in the meantime. The Enterprise Manager Exchange is located at http://www.oracle.com/goto/EMExtensibility.

    Read the article

  • How to make the Qwt path known to the run-time linker in Xubuntu

    - by Rahul
    I've successfully installed Qwt on Xubuntu 12.04 (qmake, make, make install). But now I need to make the Qwt path known to Xubuntu's run-time linker. The manual says: "If you have installed a shared library, its path has to be known to the run-time linker of your operating system. On Linux systems read 'man ldconfig' (or google for it). Another option is to use the LD_LIBRARY_PATH (on some systems LIBPATH is used instead; on MacOSX it is called DYLD_LIBRARY_PATH) environment variable." But being a newbie to the Linux environment, I'm not able to proceed further. Please help me with this.
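
    For example (a sketch; the install prefix below is a guess, so use the directory that "make install" actually reported, e.g. /usr/local/qwt-6.0.1), either register the path system-wide:

      echo /usr/local/qwt-6.0.1/lib | sudo tee /etc/ld.so.conf.d/qwt.conf
      sudo ldconfig

    or set the variable for the current shell session only:

      export LD_LIBRARY_PATH=/usr/local/qwt-6.0.1/lib:$LD_LIBRARY_PATH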

    Read the article

  • Gmail flagging emails as spam despite SPF being enabled and working perfectly

    - by Asif
    I have a website where people can recommend content to their friends by email. The issue is that these emails are being flagged as spam, whereas if I send the same thing from my development machine everything works fine. I have SPF enabled and it checks out. When sent through the website, the email appears in the Gmail inbox as: From [email protected] to [email protected]. When I send it from my development machine it appears as: From xyz.com via mywebsite.com to [email protected], mailed by mywebsite.com, which is exactly how I envisioned it. From what little I could figure out by looking at the source of the emails in Gmail, when sending from my development machine Gmail correctly recognizes my domain as mywebsite.com, for which SPF is enabled, and hence treats the email as genuine. Gmail instead associates the email with [email protected] when it is sent through the website. Can someone tell me why it does so? Any help would be really appreciated.
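
    The usual cause is that the website puts the recommending user's address in the From header, so Gmail evaluates SPF against the user's domain instead of mywebsite.com. A common fix is to send From your own SPF-covered domain and carry the user in Reply-To; a hedged JavaMail sketch (the question doesn't say what the site is written in, and all addresses and the SMTP host are placeholders):

      import java.util.Properties;
      import javax.mail.*;
      import javax.mail.internet.*;

      public class RecommendMailer {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put("mail.smtp.host", "smtp.mywebsite.com");   // placeholder SMTP host
              Session session = Session.getInstance(props);

              MimeMessage msg = new MimeMessage(session);
              // Keep From on the SPF-covered domain so Gmail's check passes.
              msg.setFrom(new InternetAddress("noreply@mywebsite.com", "A friend via mywebsite.com"));
              // The recommending user goes in Reply-To, not From.
              msg.setReplyTo(new Address[] { new InternetAddress("user@example.com") });
              msg.setRecipients(Message.RecipientType.TO, "friend@example.com");
              msg.setSubject("An article was recommended to you");
              msg.setText("Your friend thought you would like this article on mywebsite.com ...");
              Transport.send(msg);
          }
      }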

    Read the article

  • Immersive UX Changing the Face of Retail

    Changing the Face of Retail is an article I've been thinking about for most of the past couple of weeks. My goal with the article is, first, to talk about how technology built into the retail environment can be used to build better experiences for customers, and second, to talk about how this kind of evolutionary extension of the retail environment is better for customers AND retailers. I walked into the Microsoft Retail Store, or at least one of them (see one at Mission Viejo or Scottsdale), and it's really impressive...

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work, and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package, almost an entire domain model. I had thought of breaking these packages down to create subdomains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable; when procedures cross between two domain models it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:
      1. Reusability
      2. Asynchronous development
      3. Maintainability
    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development, and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them down is readability, something that experience gained from working with them would solve.
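
    Since the question maps packages onto classes, here is a tiny Java sketch of the same trade-off (all names invented for illustration): one mega-unit mixing subdomains versus smaller units grouped around the entities they actually touch.

      // Before: one mega-unit, ~100 loosely related operations.
      class AccountingOps {
          void createInvoice() { /* ... */ }
          void cancelInvoice() { /* ... */ }
          void postPayment()   { /* ... */ }
          void refundPayment() { /* ... */ }
          // ...and 96 more procedures...
      }

      // After: each unit is centered on one cluster of tables/relationships.
      class InvoiceOps {
          void create() { /* ... */ }
          void cancel() { /* ... */ }
      }

      class PaymentOps {
          void post()   { /* ... */ }
          void refund() { /* ... */ }
      }

    The split buys nothing at the method level; what changes is that a reader looking for payment logic has one obvious place to look.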

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.
    Symptoms: Importing a CPA is usually a 2-step process: first creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:
      ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
      ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true
    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can reach ~30 secs per operation. This can add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window.
    Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.
    Results: Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.
    Summary: The following diagram summarizes the entire approach and process.
    Acknowledgements: The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
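
    For completeness, a sketch of the staging-side steps (the b2bexport target and its flags should be checked against the B2B 11g command-line documentation; paths are placeholders):

      # in the staging environment, after importing each CPA into the empty repository:
      ant -f ant-b2b-util.xml b2bexport -Dlocalfile=true -Dexportfile="<Path to all_cpas.zip>"
      # in production, one consolidated import instead of hundreds of individual ones:
      ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to all_cpas.zip>" -Doverwrite=true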

    Read the article

  • Can't boot Ubuntu 12.10 graphics problem

    - by Frantumn
    I can't boot since installing Ubuntu 12.10. When I try to run Ubuntu, my computer never gets to the Ubuntu screen with the loading dots. I tried to run in recovery mode with safe graphics (failsafeX). When I do this, a message pops up saying "the system is running in low-graphics mode". If I click OK, I am asked what I would like to do and am given four options. I tried running low graphics for one session, and a message appears with a progress bar saying to stand by one minute while the display restarts. The progress bar never moves, and if I click OK the whole process just restarts. I don't know what to do from here; I can't get into the OS. I'm not sure whether the problem is related to compatibility with my laptop monitor or my graphics card (an NVIDIA 360M). I had to install using a safe graphics mode. To learn how I installed, see this link, which also has information on my computer hardware: Can't install Ubuntu since 10.10
    ----UPDATE----
    I was able to get into a desktop environment by installing nvidia-current; however, it is messy. I have a screen and I am able to see my desktop, but there is no Unity bar and none of the keyboard shortcuts work. I can right-click and create a folder on the desktop, and I can then see inside that folder in a traditional browser window. There is still no top menu bar or Unity bar. When I boot normally I don't get into the desktop environment, and I get this message in a tty: "GPU lockup switching to software FBCON". I've since played around with the tips from the comments, and I've been able to consistently get into a safe-mode desktop environment using the xorg and nouveau drivers. I've tried switching between the five different options in the Additional Drivers tab in Software Sources. The NVIDIA (proprietary, tested) driver gets beyond the GPU lockup on a normal boot and actually gets into a desktop. The issue then is that there is no Unity bar or top menu bar, and the resolution is very low. I've tried switching to the (proprietary, tested) driver and then reinstalling Unity and ubuntu-desktop, but that didn't work either.
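
    For anyone hitting the same wall, the driver switch described above can be attempted from a text console (Ctrl+Alt+F1); a hedged sketch, not a guaranteed fix:

      sudo apt-get install nvidia-current              # the proprietary driver, as in Additional Drivers
      sudo nvidia-xconfig                              # generate an xorg.conf for the NVIDIA driver
      sudo apt-get install --reinstall unity ubuntu-desktop
      dconf reset -f /org/compiz/                      # reset Compiz settings if the Unity bar is missing
      sudo reboot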

    Read the article

  • Microsoft releases Visual Studio 2010 SP1

    - by brian_ritchie
    Microsoft has been beta testing SP1 since December of last year. Today, it was released to MSDN subscribers and will be available for public download on March 10, 2011. The service pack includes a slew of fixes, and a number of new features:
      - Silverlight 4 support
      - Basic Unit Testing support for the .NET Framework 3.5
      - Performance Wizard for Silverlight
      - IntelliTrace for 64-bit and SharePoint
      - IIS Express support
      - SQL CE 4 support
      - Razor support
      - HTML5 and CSS3 support (IntelliSense and validation)
      - WCF RIA Services V1 SP1 included
      - Visual Basic Runtime embedding
      - ALM improvements
    Of all the improvements, IIS Express probably has the largest impact on web developer productivity. According to Scott Gu, it provides the following:
      - It's lightweight and easy to install (less than 10Mb download and a super quick install)
      - It does not require an administrator account to run/debug applications from Visual Studio
      - It enables a full web-server feature set, including SSL, URL Rewrite, Media Support, and all other IIS 7.x modules
      - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
      - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
      - It works on Windows XP and higher operating systems, giving you a full IIS 7.x developer feature-set on all OS platforms
      - IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps, which makes it really easy to launch and run for development scenarios.
    Good stuff indeed. This will make our lives much easier. Thanks Microsoft...we're feeling the love!
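
    That last point is literal: assuming the default install location, IIS Express will serve a folder with a single command, e.g.

      "C:\Program Files\IIS Express\iisexpress.exe" /path:C:\MySite /port:8080

    where C:\MySite is whatever directory holds the site.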

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached the point where we want to do releases using it. Before our CI system, the processes we used were:
      1. (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
      2. (Develop) -> Ready for release -> (Build -> Fix bugs) Loop -> Tag
    Our CI setup:
      - 1 server for development (DEV)
      - 1 server for qa/release (QA)
    The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (and lose the changelog on the server), or change the existing job (and lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software with a continuous integration system? Is it even done using the CI system, or manually?
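
    For concreteness, the first process maps onto a branch-and-tag flow like the following (a sketch assuming git; the post doesn't name the version control system):

      git checkout -b release/1.2          # branch when the software is ready for release
      # ...build, QA finds bugs, fixes land on the branch...
      git tag -a v1.2.0 -m "Final build"   # tag the final build
      git push origin release/1.2 v1.2.0

    A single parameterized CI job that builds from a given tag keeps the build reproducible without creating a new job, or reconfiguring the existing one, for each release.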

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "cascading bug", as described in this blog post: "A cascading bug is a bug with an effect that isn't confined to a particular software element; rather, its effect 'cascades' into other elements as well." I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even within a test-driven development process. My questions are... What strategies do you use, beyond basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process, or is it simply a by-product of complex software systems?
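
    One concrete strategy along these lines is to put a bulkhead between elements, so a failing dependency degrades locally instead of dragging down its callers. A minimal, hypothetical Java sketch of the timeout-plus-fallback half of that idea:

      import java.util.concurrent.*;

      public class Bulkhead {
          private static final ExecutorService pool = Executors.newFixedThreadPool(4);

          // Call a dependency with a hard timeout and a fallback, so a slow or
          // failing element is contained here rather than cascading upward.
          static String fetchWithFallback(Callable<String> remoteCall, String fallback) {
              Future<String> future = pool.submit(remoteCall);
              try {
                  return future.get(200, TimeUnit.MILLISECONDS);
              } catch (TimeoutException | ExecutionException | InterruptedException e) {
                  future.cancel(true);   // stop waiting; fail fast with the fallback
                  return fallback;
              }
          }

          public static void main(String[] args) {
              String result = fetchWithFallback(() -> { Thread.sleep(5000); return "live data"; },
                                                "cached data");
              System.out.println(result);   // prints "cached data" after ~200 ms
              pool.shutdownNow();
          }
      }

    Testing such guards (does the fallback actually engage under load?) is exactly the kind of check that rarely exists in a plain TDD suite.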

    Read the article

  • Deciding on a company-wide javascript strategy [on hold]

    - by drogon
    Our company is moving most of its software from thick-client WinForms apps to web apps. We are using ASP.NET MVC on the server side. Most of the developers are brand new to the web and need to become efficient and knowledgeable at writing client-side web code (JavaScript). We are deciding on a number of things and would appreciate feedback on the following:
      1. Angular.js or Backbone.js? Backbone (w/ Underscore) is certainly more lightweight, but requires more custom development. Angular seems to be a full-fledged framework, but would require everyone to embrace it and probably has a longer learning curve (??). (Note: I know nothing about Angular at this point.)
      2. Require.js or script includes w/ MVC bundleconfig? Require.js makes development "feel like" C# (importing namespaces), but integrating the build/minification process can be a pain (especially the configuration). Bundling via MVC requires developers to worry more about which scripts to include, but has less overall development friction.
      3. TypeScript vs JavaScript? Regardless of frameworks, our developers are going to need to learn the basics. TypeScript is more like C# and MAY be easier for C# developers to understand. However, learning TypeScript before JavaScript may hinder their mastery of JavaScript at the expense of efficiency.

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts:
      - there will be 2-3 developers
      - at least one of the developers uses Windows, the rest use Linux
      - there is one remote Linux-based machine, which should handle test and production instances
      - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on)
      - client: internal, frequent business logic changes, scrum, daily deployments
    What I want to achieve is a good workflow at as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
      - developers code locally
      - there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine
      - while coding, the developer deploys to the Vagrant machine
      - after a local merge to the test branch, Jenkins on the Vagrant machine handles tests
      - when everything is fine, the developer pushes commits and merges
      - Jenkins on the remote machine pulls commits from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance
    Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about:
      - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? (See the sketch below.)
      - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
      - Is there any sense in having a local Jenkins on Vagrant?
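
    On the multiple-instances worry: Tomcat supports this out of the box via separate CATALINA_BASE directories, one per instance, each with its own conf/server.xml ports. A sketch with invented paths:

      # one shared Tomcat binary (CATALINA_HOME), one base directory per instance
      sudo cp -r /var/lib/tomcat7 /srv/tomcat-test
      # edit /srv/tomcat-test/conf/server.xml and shift the shutdown, HTTP and AJP ports
      CATALINA_BASE=/srv/tomcat-test /usr/share/tomcat7/bin/startup.sh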

    Read the article

  • How mature is FreeBASIC?

    - by David
    A friend of mine is considering using FreeBASIC in a critical production environment. They currently use GW-BASIC, and they want to make a soft transition towards more modern languages. I am just worried that there might be undetected bugs in the software. I see that its version number is 0.22.0, which suggests that it is not quite mature yet. I also read this discussion, without being able to draw a conclusion. Also, on their SourceForge pages there is no indication of whether it is alpha or beta (which anyway is not a very good indicator). Does anyone have their own experience of its maturity, ideas on how to judge its maturity, or knowledge of companies using FreeBASIC in a critical production environment?

    Read the article

  • Lead Programmer definition clarification

    - by Junaid
    I have been working on a PHP and MySQL based web application for more than 5 years now. I started my career as an Intern, then Jr. Developer, then Software Developer, and now Sr. Software Engineer [Team Lead], which is what I am nowadays. I was looking at the Wikipedia entry on what a lead programmer is. It states the following: "A lead programmer is a software engineer in charge of one or more software projects. Alternative titles include Development Lead, Technical Lead, Senior Software Engineer, Software Design Engineer Lead (SDE Lead), Software Manager, or Senior Applications Developer. When primarily contributing in a high-level enterprise software design role, the title Software Architect (or similar) is often used. All of these titles can have different meanings depending on the context." My current job responsibilities are more or less those of a Development Lead, and to some extent close to a Software Architect, because I usually design the core structure of new products, manage 2-3 projects simultaneously, and in the meantime assist other teams with the structural design of their projects. I am usually on calls with clients along with project managers, and I code most of the time, whenever my team is stuck somewhere, under heavy workload, integrating some third-party API, etc. The primary reason for writing this is to find out whether I qualify for a Development Lead title, in accordance with the job responsibilities described above.

    Read the article

  • Juju LXC configuration

    - by Preethi
    I've looked at this post (http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage) for setting up juju on a local environment with LXC. However, is there a way to use juju with LXC in a non-local environment? I am looking at a scenario where LXC containers are deployed on multiple nodes. That is, let's say I have virtual machines m1 and m2, with wordpress deployed in a container on m1 and mysql deployed in a container on m2. Is there a way to orchestrate these deployments with juju?
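
    Recent juju releases support placement directives, so the layout described should be expressible roughly as follows (a sketch; verify that your juju version supports the lxc placement syntax):

      juju add-machine                      # provisions m1 (machine 1)
      juju add-machine                      # provisions m2 (machine 2)
      juju deploy wordpress --to lxc:1      # wordpress in a container on m1
      juju deploy mysql --to lxc:2          # mysql in a container on m2
      juju add-relation wordpress mysql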

    Read the article
