Search Results

Search found 9233 results on 370 pages for 'building'.

Page 188/370

  • Kent .Net/SqlServer User Group – Upcoming events

    - by Dave Ballantyne
    At the Kent user group we have two upcoming events. Both are to be held at the F-Keys Training suite (http://f-keys.co.uk/) in Rochester, Kent. If you haven’t attended before, please note the location here.

    14-June: Is your code S.O.L.I.D? – Nathan Gloyn. Everybody keeps on about SOLID principles, but what are they, and why should you care? This session is an introduction to SOLID, and I'll aim to walk through each principle, telling you about that principle and then showing how a code base can be refactored using the principles to make your life easier. Come the end of the session you should have a basic understanding of the principles, why to use them and how using them can improve your code. Building composite applications with OpenRasta 3 – Sebastien Lambla. A wave of change is coming to Web development on .NET. Packaging technologies are bringing dependency management to .NET for the first time, streamlining development workflow and creating new possibilities for deployment and administration. The sky's the limit, and in this session we'll explore how open frameworks can help us leverage composition for the web. Register here for this event: http://www.eventbrite.com/event/1643797643

    05-July: Tony Rogerson. Achieving a throughput of 1.5 Terabytes, or over 92,000 8-Kbyte 100% random reads per second, on kit costing less than 2.5K, and of course what to do with it! The session will focus on commodity kit and how it can be used within business to provide massive performance benefits at little cost. End to End Report Creation and Management using SQL Server Reporting Services – Chris Testa-O'Neill. This session will walk through the authoring, management and delivery of reports with a focus on the new features of Reporting Services 2008 R2. At the end of this session you will understand how to create a report in the new report designer, be aware of the report management options available, and know which delivery mechanisms can be used to deliver reports. Register here for this event: http://www.eventbrite.com/event/1643805667

    Hope to see you at one or the other (or even both, if you are that way inclined).

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to many of the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build.

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing. If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing? Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge. Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement. We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when. There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved. As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format. In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
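    To make the asof clause concrete, here is an illustrative sketch (not part of the original article) that runs such a historical query through the TFS 2010 client object model rather than a saved *.wiql file. The collection URL, the work item type filter and the date literal are placeholders, not values from the article.

    ```csharp
    // Minimal sketch: an "asof" WIQL query via the TFS 2010 client API.
    // Requires references to Microsoft.TeamFoundation.Client and
    // Microsoft.TeamFoundation.WorkItemTracking.Client.
    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class AsOfQueryExample
    {
        static void Main()
        {
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));   // placeholder URL
            var store = collection.GetService<WorkItemStore>();

            // The same WIQL you would save in a *.wiql file, with an asof clause added.
            const string wiql =
                @"SELECT [System.Id], [System.Title], [System.State]
                  FROM WorkItems
                  WHERE [System.WorkItemType] = 'Requirement'
                  ASOF '6/1/2010'";

            foreach (WorkItem workItem in store.Query(wiql))
                Console.WriteLine("{0}: {1} ({2})", workItem.Id, workItem.Title, workItem.State);
        }
    }
    ```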
    Figure: Saving Queries locally can be useful. All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done. Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships. Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement. Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose. Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets.

    Changesets – Who wrote this crap? Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds. Figure: Changesets tell us what happened to the files in Version Control. Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task, which automatically associates it. Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level. What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system. Figure: Annotate shows the lines. The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them. Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases. The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better, method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together. Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled. When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build. The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build. It will then use that information to identify the Work Items that are associated with all of those Changesets, associate them with the Build, and change the “Integrated In” field of those Work Items. Figure: Find all of the Work Items to associate with. The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build. Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done. The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement. It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to test the existence of, and then re-existence of, a Bug. Figure: Associating a Test Case with an automated Test. Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions. In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world. If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations. This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature. In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and do not rewrite them.

    Conclusion: Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don’t even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member. Business Analysts manage Requirements and Change Requests. Programmers manage Tasks and check in against Change Requests and Bugs. Testers manage Bugs and Test Cases. Build Masters manage Builds. Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now. We use a single TFS Team Project.
    $/MAIN/src/
    $/MAIN/src/ApplicationA/VSSOlution.sln
    $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
    $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
    $/MAIN/src/ApplicationB/...
    $/MAIN/src/ApplicationC
    $/MAIN/src/SharedInfrastructureA
    $/MAIN/src/SharedInfrastructureB
    My Goal (a pretty typical promotion pipeline):
    1. When a code change is made to a given application, I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure Components. I often have some database scripts or changes as well.
    2. If developer testing passes, I want to have a manually triggered but automated deploy of that build on a STAGING server where end-users will review new functionality.
    3. Once it's approved by end users, I want a manually triggered auto-deploy to production.
    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is more single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • Keeping game model and graphics/animation separate but in sync

    - by AJM
    Suppose I'm building a chess game where I want to have animations. Pieces glide to their new squares when moved. Pieces perform attack animations when capturing other pieces. I'm not sure how to effectively separate the data and logic needed for these animations from the actual game model (in the MVC sense). The pieces themselves should ideally not have to worry about their pixel coordinates or current animation frame. At the same time, many changes to the model are effectively driven by animations. A moved piece changes its position after (before?) its sprite is done gliding. A piece is removed from the board after the capturing piece has finished its attack animation. How would you suggest I manage the game model, the graphics and animations, and their relationships? For example, where would the animations "live"? How would animations be created and managed in response to player moves? How would animations drive updates to the game model, or how would the game model drive animations?
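    One common shape for an answer, sketched below in C# (an illustration, not part of the original question): the model never sees pixels, a thin controller turns each move into an animation object, and the animation invokes a completion callback so the board state is only mutated when the visual effect finishes. All class and member names here are hypothetical.

    ```csharp
    // Sketch of an animation queue that defers model updates to a completion callback.
    using System;
    using System.Collections.Generic;

    class MoveAnimation
    {
        public float Duration;            // seconds the sprite glides for
        public Action OnCompleted;        // e.g. () => board.CommitMove(move)
        float elapsed;

        // Returns true when finished; interpolate the sprite position here.
        public bool Update(float dt)
        {
            elapsed += dt;
            if (elapsed < Duration) return false;
            if (OnCompleted != null) OnCompleted();   // the model changes only now
            return true;
        }
    }

    class AnimationQueue
    {
        readonly Queue<MoveAnimation> pending = new Queue<MoveAnimation>();

        public void Enqueue(MoveAnimation animation) { pending.Enqueue(animation); }

        // Call once per frame; animations run one at a time so captures resolve in order.
        public void Update(float dt)
        {
            if (pending.Count > 0 && pending.Peek().Update(dt))
                pending.Dequeue();
        }
    }
    ```

    Whether the model commits a move before or after the glide then becomes a single explicit decision: it is whatever the callback does.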

    Read the article

  • Ubuntu 12.10 unmet dependencies

    - by John
    I have Ubuntu 12.10 with 3.2.0.24-generic #39-Ubuntu running on a Dell Inspiron 700m laptop. I have unmet dependencies as follows:
    root@John-700m:/home/John# sudo apt-get remove --purge linux-image-3.5.0-{18,27,31,34}-generic
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package 'linux-image-3.5.0-18-generic' is not installed, so not removed
    Package 'linux-image-3.5.0-27-generic' is not installed, so not removed
    Package 'linux-image-3.5.0-31-generic' is not installed, so not removed
    Package 'linux-image-3.5.0-34-generic' is not installed, so not removed
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
    linux-generic : Depends: linux-image-generic but it is not going to be installed
    linux-image-extra-3.5.0-18-generic : Depends: linux-image-3.5.0-18-generic but it is not going to be installed
    linux-image-extra-3.5.0-27-generic : Depends: linux-image-3.5.0-27-generic but it is not going to be installed
    linux-image-extra-3.5.0-31-generic : Depends: linux-image-3.5.0-31-generic but it is not going to be installed
    linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    I tried to remove the packages, but I still get the same error message. Any help would certainly be appreciated.
    @Eric, well noted. Here are the results:
    model name : Intel(R) Pentium(R) M processor 1.60GHz
    flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2
    Thanks again.
    @Eric, if I need to downgrade to Ubuntu 12.04 from 12.10, can I do it without having to mess up my current partition or files? I also have Windows XP running on the machine. Best.
    @Eric, if you are online, I could use your help (or anybody else's). I installed the package for the non-PAE unit and I still get the same error. Any suggestions? Thanks

    Read the article

  • Oracle Retail Industry Forum Europe 2014 – Registration Now Open!

    - by Marie-Christin Hansen-Oracle
    We are delighted to announce that registration for the 4th annual Oracle Retail Industry Forum Europe (ORIF Europe) is now open. The event is being held from 10-11 September at the Renaissance St Pancras Hotel in London. ORIF Europe is a must attend event for Oracle Retail customers, retailers who are about to embark on an Oracle implementation, or for those who simply wish to learn more about Oracle Retail solutions and how they support the provision of commerce anywhere. Further details will be announced over the coming weeks, but already confirmed as speakers are: Paul Hornby, Head of eCommerce at Shop Direct, who will discuss the company’s ambitions, challenges faced and the strategy undertaken by the team in driving the business from a catalogue-based to a web-based commerce business. The session will reveal how Shop Direct and Oracle Retail are working together to achieve the transformation of this business into a world-class digital retailer, by building a foundation for future growth for each of its individual brands and target markets. Kate Ancketill, CEO and Founder of GDR Creative Intelligence who will illustrate what best-in-market 'Access Anywhere' retail looks like. From individual retail and next generation personalisation of in-store service, to the land grab for delivery innovation, cutting edge brands are 'training' consumers to check into stores in exchange for concrete benefits. Kate will explore the opportunity this is opening up across the retail landscape. Register for the Oracle Retail Industry Forum today to secure your place.

    Read the article

  • Apache2 + Php + Pthreads HowTos

    - by Drug
    04 LTS 64 bit. What I would really love to do is sudo apt-get install libapache2-mod-php5 but compile PHP with --enable-maintainer-zts so I could later install pthreads with pecl install pthreads. Sadly, I understand that this is not possible. I know that the easiest way is to recompile PHP together with Apache support and ZTS. However, I really like the way the standard Ubuntu PHP package is configured, and I am used to the paths for the CLI php.ini config, the Apache php.ini config, and the other paths for modules and files that this Ubuntu package defines. So I just want to change the package source a little bit and install it.
    # Get the stuff necessary to build the package
    sudo apt-get build-dep php5-common
    # Get the package source
    sudo apt-get source php5-common
    At this point I am getting sources not for the php5-common package but for the whole php5 package. If I were to sudo make && make install at this point, would it mean that I am installing a lot of unnecessary stuff?
    # Add configuration options
    ./configure --enable-maintainer-zts
    Does this mean that I am appending a configuration option? Or am I generating a whole new config?
    Alternative at this point: is there a way of getting the config options that this package defines, so that I can grab a PHP source from php.net and compile it with
    $ ./configure --prefix=package_prefix \  // Option 1 from package
      --enable-embed \                       // Option 2 from package
      --with-regex=php \                     // Option 3 from package
    Continuing the main idea ...
    Solution 1
    # Compile (Not compiling)
    sudo make && make install
    Will I be building PHP with EVERYTHING at this point? If I compile like this, will I not be able to remove the mess I made using sudo apt-get purge php5?
    Solution 2
    # Recompile the package
    dpkg-buildpackage -rfakeroot -uc -b
    This does not compile either. Please correct my steps, so I can install everything correctly.

    Read the article

  • Package manager borked with gforge

    - by Leif Andersen
    I've been having a problem with the package manager. I seemed to have installed gforge, partially, but it keeps giving me errors whenever I install something. (Note that the thing I'm trying to install actually does get installed, but there is always an error returned). Here it is: Creating /etc/gforge/httpd.conf Creating /etc/gforge/httpd.secrets Creating /etc/gforge/local.inc Creating other includes invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found. dpkg: error processing gforge-db-postgresql (--configure): subprocess installed post-installation script returned error exit status 100 Errors were encountered while processing: gforge-db-postgresql E: Sub-process /usr/bin/dpkg returned an error code (1) When I try to remove it with: sudo apt-get purge gforge-common I get this: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: gforge-common* gforge-db-postgresql* 0 upgraded, 0 newly installed, 2 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 5,853kB disk space will be freed. Do you want to continue [Y/n]? (Reading database ... 717305 files and directories currently installed.) Removing gforge-db-postgresql ... Replacing config file /etc/postgresql/8.4/main/pg_hba.conf with new version invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found. dpkg: error processing gforge-db-postgresql (--purge): subprocess installed pre-removal script returned error exit status 100 Removing gforge-common ... Purging configuration files for gforge-common ... Processing triggers for man-db ... Errors were encountered while processing: gforge-db-postgresql E: Sub-process /usr/bin/dpkg returned an error code (1) And it complains until I do a: sudo apt-get install -f At which point gforge is re-installed. I'm out of ideas, does anyone else have any other ideas with what might be wrong, and more importantly, how I can fix it? Thank you.

    Read the article

  • CRM Evolution 2014: Mediocrity is the New Horrible in Customer Service

    - by Tuula Fai
    "Mediocrity is the new horrible in customer service," Blair McHaney, Gold's Gym Almost everyone knows that customers' expectations have risen. But, after listening to two days of presentations at CRM Evolution, I think it’s more accurate to say that customers' expectations have skyrocketed. Fortunately, most companies have gotten the message and are taking their customer service to a higher level. For those who've been hesitant to 'boldly go where their customer service organization has not gone before,' take heart. I’ve got some statistics that will encourage you to take those first few steps. Why should I change? By engaging customers online, ancestry.com achieved a 99.5% customer satisfaction score (CSAT) while improving retention and saving millions on greater efficiency, including a 38%-50% drop in inbound calls and emails.1 By empowering employees to delight customers, Gold’s Gym achieved a 77.5% Net Promoter Score (NPS) and 22% customer churn rate. No small feat when you consider the industry averages are 40% NPS and 45% churn.2 By adapting quickly to social media, brands like Verizon have benefited from social community members spending 2.5x-10x more than average customers.3 ‘The fierce urgency of now’ is upon us in customer service. You can take your customer service to a higher level! To find out more, click here CRM Evolution Customer Service Experience Footnotes: 1. Arvindh Balakrishnan, Is Your Customer Service Modern?2. Blair McHaney, Wire Your Organization with Customer Feedback3. Becky Carroll, The Power of Communities for Improving the Service Experience and Building Advocates

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use jasmine, as it seems pretty solid and has a lot of features for stubbing, spying and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, mysql, mongodb, facebook, and more). My issue is, I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the mysql execution and returning a static dataset. So, I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying that a method is being called. I know this is kind of abstract and vague; it seems that the idea of testing is very much abstract though, so hopefully someone has some experience and can guide me in the right direction. Any advice, or reading I can do, is more than welcome. Thanks in advance.

    Read the article

  • SteelSeries & JavaFX External Control via Tinkerforge

    - by Geertjan
    The first photo shows me controlling a JavaFX chart by rotating a Tinkerforge device, while the other two are of me using code by Christian Pohl from Nordhorn, Germany, to show components from Gerrit Grunwald's SteelSeries library being externally controlled by Tinkerforge devices: What these examples show is that you can have a robot (i.e., an external device) of some kind that can produce output that can be visualized via JavaFX charts and SteelSeries components. For example, imagine a robot that moves around while collecting data on the current temperature throughout a building. That's possible because a temperature device is part of Tinkerforge, just like the rotating device and distance device shown in the photos above. The temperature data collected by the robot would be displayed in various ways on a computer via, for example, JavaFX charts and SteelSeries components. From there, reports could be produced and adjustments could be made to the robot while it continues moving around collecting temperature data. The fact that Tinkerforge has Wifi support makes a scenario such as the one described here completely possible to implement. And all of this can be programmed in Java, without very much work, since the Java bindings for Tinkerforge are simple to use, as shown in yesterday's blog entry.

    Read the article

  • Convert MP3 to AAC,FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting from MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFMPEG (download the x64 Windows build). Also, for the best results, get the Nero AAC Encoder (download). Now the command line:
    STEP 1 (from FLAC):
    ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a
    or (from MP3):
    ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a
    Now output.m4a is an intermediate state that we then put an AAC wrapper on via FFmpeg.
    STEP 2:
    ffmpeg -i output.m4a -vn -acodec copy final.aac
    Done :) There are a couple of options with the FFMPEG library, in that we can look at importing the libraries and manipulating the API for the direct result; FFMPEG has this support. You can get the relevant libraries from here. They even have the source if you are that keen :-)
    In this case I am going to wrap the command lines into C# external process threads. (For the app that I am building to convert the 10 million tracks there is a complex multithreaded app to support this novel code.)
    // Arrange metadata about the call
    Process myProcess = new Process();
    ProcessStartInfo p = new ProcessStartInfo();
    string sArgs = string.Format(" -i {0} -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of {1}", inputfile, outputfile);
    p.FileName = "ffmpeg.exe";
    p.CreateNoWindow = true;
    p.RedirectStandardOutput = true;
    //p.WindowStyle = ProcessWindowStyle.Normal
    p.UseShellExecute = false;
    // Execute
    p.Arguments = sArgs;
    myProcess.StartInfo = p;
    myProcess.Start();
    myProcess.WaitForExit();
    // Write details about the call
    myProcess.StandardOutput.ReadToEnd();
    Now in this case we would execute a second call using the same code but with different sArgs to put the AAC wrapper on the m4a file. That's it. So if you need to do some conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast. With conversion times of around 2-3 seconds, all of this can be done on the fly :-)
    Justin Oehlmann
    ref : StackOverflow.com
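    The article stops short of showing the second invocation; below is a minimal sketch of both steps wrapped in one helper. It is not the author's production code: it assumes ffmpeg and neroAacEnc are on the PATH, routes step 1 through cmd.exe /C (a pipe is a shell feature, so passing it straight to ffmpeg.exe would not be interpreted), and uses temp.m4a as a placeholder intermediate file name.

    ```csharp
    // Minimal sketch of the two-step conversion described in the article.
    // Assumptions: ffmpeg and neroAacEnc are on the PATH; the piped step 1
    // command is run through cmd.exe /C; file names are placeholders.
    using System.Diagnostics;

    static class AacConverter
    {
        static void Run(string fileName, string args)
        {
            var process = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    FileName = fileName,
                    Arguments = args,
                    CreateNoWindow = true,
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                }
            };
            process.Start();
            process.StandardOutput.ReadToEnd();   // drain output so the process can exit
            process.WaitForExit();
        }

        public static void ConvertMp3ToAac(string input, string final)
        {
            // Step 1: decode to WAV and encode to M4A via the Nero encoder.
            Run("cmd.exe", string.Format(
                "/C ffmpeg -i \"{0}\" -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of temp.m4a", input));

            // Step 2: copy the AAC stream into an .aac container.
            Run("ffmpeg.exe", string.Format(
                "-i temp.m4a -vn -acodec copy \"{0}\"", final));
        }
    }
    ```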

    Read the article

  • GL Expense Allocation in OPM Actual Costing

    - by Annemarie Provisero
    ADVISOR WEBCAST:  GL Expense Allocation in OPM Actual Costing PRODUCT FAMILY: Oracle Process Manufacturing     March 16, 2011 at 8 am PT, 9 am MT, 11 am ET This session is designed for customers and functional users, and discusses the setups necessary for expense allocation (flow of expenses from general ledger balances to Oracle Process Manufacturing Costing then back to GL, Examples include screen shots of test cases). TOPICS WILL INCLUDE: Concept of GL expense allocation in OPM. Situation where this functionality is more suitable. Setups needed to make this work. Examples with screen shots (Performed test cases). Known gaps in this functionality. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support.For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule.Click here to visit the E-Business Communities in My Oracle Support Note that all links require access to My Oracle Support.

    Read the article

  • How do you manage the testing of your Android software on physical devices?

    - by Philip Regan
    I'm in charge of managing mobile application development at my company, and I am currently building a mobile device "library" for testing. Essentially, we want to have a representative device in-house for each of the OSes we are developing for, currently iOS (iPhone-only), Blackberry, and Android. Simulators only go so far, so I'm placing into the process a step to test software on the devices themselves. The problem we're finding is with Android. I don't think any of us here ever really understood just how fragmented the whole platform is until we started looking at devices to acquire. We are going to wait until v2.3 of Android is released, but which products to choose? Do we go by the most popular by market share? Do we get a small range of products by specs, from least to most powerful overall? We're trying to avoid having to manage a dozen different devices to test each app, if not because of the cost then because of the repeated time sink. How do you manage the testing of your Android software on physical devices?

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will deep-dive into specific features in subsequent posts. "What's New?" In this release, we focused on four things:
    1. Lifecycle Management support for the new Database 12c Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of a large number of systems by · leveraging new framework capabilities for lifecycle operations, such as the new advanced ‘emcli’ script option · refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features · rollback support for single-instance databases · an improved "OFFLINE" patching experience · faster collection of ORACLE_HOME configurations
    Lifecycle Management Support for the new Database 12c - Pluggable Databases
    Database 12c introduces Pluggable Databases (PDBs), the brand new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...
    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)
    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of Change Activity Processes in a datacenter:
    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading a large number of database environments
    · Rolling out Compliance Rules
    · Database consolidation to Exadata environments
    CAP provides user flows for Compliance Officers/Managers (incl. lead administrators) and Operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or, in some cases, outside Enterprise Manager). Upon completion, compliance is evaluated for validations, and the status of the tasks and the plans is updated. Learn More about CAP ...
    Improved Configuration & Compliance Management of a Large Number of Systems
    Improved configuration comparison: get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results, without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that it is much easier to spot differences, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.
    Improved Configuration Search & Advanced EMCLI Script Option for Mass Automation
    Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages of this include retrieving a qualified list of targets using Configuration Search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search; this required the use of the advanced options, which are now clearly defined and easy to use. You can also perform "Configuration Search" using the EMCLI. Users can find and execute Configuration Searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all Database targets running on top of Exadata, and further filter the results by refining by options like name, host, etc.:
    emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"
    Use this in powerful mass automation operations using the new emcli script option, for example to solve the use case of finding all DBs running on Exadata that house E-Biz, and patching them.
    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with the script option directly: $<path to emcli>/emcli @myPSU_Patch.py
    Richer compliance content: now over 50 Oracle-provided Compliance Standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided Real-Time Monitoring Standards containing over 900 Compliance Rules across 500 Facets. These new Real-Time Compliance Standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.
    Enhancements to Patch Management
    Overhauled "OFFLINE" patching experience: a simplified patch upload UI improves the offline experience of patching. There is now a single-step process to get the patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's Software Library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:
    $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>
    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.
    Other improvements: patch rollback for single-instance databases, a new option in the Patch Plan to roll back the patches added to patch plans. Upon execution, the procedure rolls back the patch and the SQL applied to single-instance databases. Improved and faster configuration collection of Oracle Home targets can enable more reliable automation of higher-level functions like Provisioning, Patching or Database as a Service.
    Just to recap, here is a list of database lifecycle management features (items new or enhanced in Release 3 were highlighted in red in the original post):
    • Discovery, inventory tracking and reporting
    • Database provisioning, including: o Migration to Pluggable Databases o Plugging and unplugging of pluggable databases o Gold-image-based cloning o Scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including: o Patch advisories in online (connected with My Oracle Support) and offline mode o Patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases) o Reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-box content
    • Change Activity Planner for planning, designing and tracking long-running processes
    For more information on Enterprise Manager’s database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html

    Read the article

  • ArchBeat Link-o-Rama for October 22, 2013

    - by OTN ArchBeat
    The road ahead for WebLogic 12c | Edwin Biemond Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2. Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 2: Setup and Configuration | Michael Rainey Michael Rainey continues his series with another technical article for you GoldenGate fans. There's A Virtual Developer Day in Your Future Have you experienced OTN VDD? Relax, it's not something that requires medical attention. But an OTN Virtual Developer Day event will enlarge your brain with hands-on information on Oracle technologies. Upcoming events will cover Oracle WebLogic and Coherence (Nov 5) and Oracle ADF (Nov 19). My Summary of Oracle Open World 2013 | Luis Weir SOA/Middleware specialist Luis Weir's first trip to Oracle OpenWorld was what you might call a total immersion experience. His blog post includes details about what kept him very, very busy during his OOW13 experience. Live Blog: Book Review of Building Modular Cloud Apps with OSGi by Bert Ertman and Paul Bakker | Lucas Jellema This interesting post from Oracle ACE Director Lucas Jellema is a work in progress. He's updating as he goes. Check it out. Thought for the Day "In the information age, you don't teach philosophy as they did after feudalism. You perform it. If Aristotle were alive today he'd have a talk show." — Timothy Leary, American psychologist and writer (October 22, 1920 – May 31, 1996) Source: brainyquote.com

    Read the article

  • missing libjpeg.so.62 from ia32 shared library

    - by user170200
    I am trying to install a chemical/molecular biology modeling program called Molsoft ICM-Pro. Initially after downloading the program and trying to open, it gave me error messages that I was missing shared libraries, and after talking with my network administrator he recommended I install the ia32 shared libraries using sudo apt-get install ia32-libs Which gives sudo apt-get install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done ia32-libs is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. so I am assuming the libraries installed correctly, but now when I try to run the program I get this error: ubuntu:/home/reilly/icmd icm icm: error while loading shared libraries: libjpeg.so.62: cannot open shared object file: No such file or directory So my question is, where can I get the library containing libjpeg.so.62? Additionally, I was told I would need libXmu.so.6 and libtiff.so.3 . Is there a shared library that could be missing that would contain these files? I am an ubuntu noob, so sorry if the information I provided was unclear. Any help would be immensely appreciated! btw I am using ubuntu 12.04 dual boot with windows on an HP pavilion dv6

    Read the article

  • Objective-C Function

    - by nimbus
    I'm unsure about how to make an MWE with Objective-C, so if you need anything else let me know. I am trying to run through a tutorial on building an iPhone app and have gotten stuck defining a function. I keep getting an error message saying "use of undeclared identifier." However, I believe I have declared the function. In the view controller I have:
    if (scrollAmount > 0) {
        moveViewUp = YES;
        [scrollTheView:YES];
    } else {
        moveViewUp = NO;
    }
    with the function under it:
    - (void)scrollTheView:(BOOL)movedUp {
        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.3];
        CGRect rect = self.view.frame;
        if (movedUp) {
            rect.origin.y -= scrollAmount;
        } else {
            rect.origin.y += scrollAmount;
        }
        self.view.frame = rect;
        [UIView commitAnimations];
    }
    I have declared the function in the header file (that I have imported):
    - (void)scrollTheView:(BOOL)movedUp;
    Any help would be appreciated, thank you in advance.

    Read the article

  • Internship in License Contract Management

    - by cristian.condurache(at)oracle.com
    Hi Everyone, My name is Luca. I am an intern in the License Contract Management team in Italy. I studied Economics and Business in Pescara and finished my Master’s Degree in July 2009. After a short work experience near my home town I decided to look for a job in an international company. I got in touch with Oracle in January 2010. I had a telephone interview and then a face-to-face interview. On a cold and grey morning, I arrived in Milan... my first impression was fantastic... a big modern building with wide TVs everywhere. I was a little nervous but very excited. I understood this could be a great opportunity... The interview went well and I started to work in March. After a training period I was quickly involved in the closing of the last quarter of the fiscal year, of which May is the last month at Oracle. Working as a License Contract Manager is a real challenge for a fresh graduate. It involves thoroughly understanding the Oracle Policies and Practices with regard to License Contracts. In my experience, especially in May, I learnt to work under high pressure, within time constraints, and to keep up with constant changes. In this period I also had the opportunity to be involved in different negotiations, being directly in contact with the customers. This helped me to develop my relational skills during complex transactions. Looking back at the nine months at Oracle I can say I have a better understanding of the IT world. It is a complex environment that changes continuously, offering new challenges to learn from every time. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: License Contract Management,opportunity,Oracle Policies,internship

    Read the article

  • Workaround: build FBX in XNA raise OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:
    Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
    at System.Collections.Generic.List`1.set_Capacity(Int32 value)
    at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
    at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
    at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
    at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
    //additional calls here …
    My desktop PC has 8 GB of RAM, and Visual Studio’s process devenv.exe uses under 2 GB of it during the build (about 3.5-4 GB of RAM is always free). It’s obvious that VS can’t address more than 2 GB of RAM, and when that limit is exceeded, the build fails. The OS on my PC is Win x64, so I “charge” devenv.exe by using the editbin.exe utility – in the VS Command Prompt I run the following:
    editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE
    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file builds successfully! Of course, you must put the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to do an additional action – more info here.
    P.S.: although now you can build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). And if your model’s vertex buffer size is more than 64 MB (with the Reach profile), that model can’t be built and raises an error.

    Read the article

  • How to design highly scalable web services in Java?

    - by Kshitiz Sharma
    I am creating some web services that would have 2000 concurrent users. The services are offered for free and are hence expected to get a large user base. In the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, like Building highly scalable web services. However, my requirements differ from the question above. For example: my application does not have a user interface, so images, CSS and JavaScript are not an issue; and it is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately. This is my project setup:
    - REST-based web services using Apache CXF
    - Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tuning)
    - Tomcat 6.0
    - MySQL 5.5
    My questions are:
    1. Are there alternatives to MySQL that offer better performance for what I'm trying to do?
    2. What are some general things to abide by in order to scale a Java-based web application? I am thinking of putting my application in two Tomcat instances with httpd redirecting the request to the appropriate Tomcat on the basis of load. Is this the right approach? Separate Tomcat instances can help, but then the database becomes the bottleneck, since both applications access the same database.
    3. I am a programmer, not a DB admin; how difficult would it be to cluster a MySQL database (or to cluster whatever database is offered as an alternative to 1)?
    4. How effective are caching solutions like EHCache?
    5. Any other general best practices?
    Some clarifications: Could you partition the data? Yes, we could, but we're trying to avoid it. We need to run a lot of data mining algorithms and the design would evolve over time, so we can't be sure what lines of partition should be there.

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2D lookup texture into a 3D LUT. When I simply use:
    ColorAtlas.GetData(data);
    ColorAtlas3D.SetData(data);
    I get this: I tried building my 2D atlas horizontally but it did not help – the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas?
    Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:
    Color[] data2D = new Color[0];
    for (int i = 0; i < 32; i++)
    {
        Color[] data = new Color[32 * 32];
        GraphicsDevice.SetRenderTarget(null);
        ColorAtlas.GetData(0, new Rectangle(0, i*32, 32, 32), data, 0, data.Length);
        int oldLength = data2D.Length;
        Array.Resize<Color>(ref data2D, oldLength + data.Length);
        Array.Copy(data, 0, data2D, oldLength, data.Length);
    }
    ColorAtlas3D.SetData(data2D);
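    For reference, the sketch below (not part of the original question) shows one way to take explicit control of the ordering: read the whole 2D atlas once and remap every texel with an index formula. It assumes a horizontally built 1024×32 atlas with the 32 Z-slices laid out side by side, and that Texture3D.SetData expects x to vary fastest, then y, then z; if your atlas layout differs, only the src formula needs to change.

    ```csharp
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class LutRepack
    {
        // Hypothetical remap for a horizontally built atlas (1024x32: the 32
        // Z-slices side by side). Adjust 'src' if your layout differs.
        public static void CopyAtlasTo3D(Texture2D colorAtlas2D, Texture3D colorAtlas3D)
        {
            const int size = 32;
            var atlas = new Color[size * size * size];   // 1024 * 32 texels, row-major
            colorAtlas2D.GetData(atlas);

            var volume = new Color[size * size * size];
            for (int z = 0; z < size; z++)
                for (int y = 0; y < size; y++)
                    for (int x = 0; x < size; x++)
                    {
                        int src = (z * size + x) + y * size * size; // texel (x, y) of tile z
                        int dst = x + y * size + z * size * size;   // x fastest, then y, then z
                        volume[dst] = atlas[src];
                    }

            colorAtlas3D.SetData(volume);
        }
    }
    ```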

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma watch the SICP lectures I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • Presenting at VS Live! Orlando in December

    - by Steve Michelotti
    I’ll be presenting at VS Live!, December 10-14, in Orlando, FL. I’ll be presenting Azure Web Sites. This is the session abstract: Azure Web Sites brings a whole new level of power and simplicity to cloud computing. This demo-heavy session will show numerous features that allow you to deploy your site in a matter of seconds. Whether you are building a completely custom app or deploying from one of the numerous templates provided (such as WordPress), you’ll be up and running in no time. Want to use Node.js or PHP and deploy from Git? No problem! Azure Web Sites gives you the power of elastic scaling while still providing streamlined development and an effortless deployment experience. This presentation will also cover features including monitoring, custom domains, working with SQL databases and more!   SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Register here: http://bit.ly/VOSPK19Reg and use code VOSPK19. The great part about Visual Studio Live!: four events in one! This year, the event will be co-located with SQL Server Live!, SharePoint Live!, and Cloud & Virtualization Live!. You can customize your conference agenda and attend ANY sessions from all four events. Register now: http://bit.ly/VOSPK19Reg

    Read the article

  • Quick Script for Adding Skype Groups

    - by Robert May
    So, I needed to add about 30 people to several different Skype groups today, and I didn’t want to repeat the /add [skypename] thing over and over and over. Building the list was a pain . . . I couldn’t find a good way to extract all of the users in an existing group. There’s probably an API or something, but I just did that part by hand. Adding them to the groups was pretty easy with Windows Scripting Host. Basically, I just ran this:
    <package>
       <job id="vbs">
          <script language="VBScript">
             set WshShell = WScript.CreateObject("WScript.Shell")
             WshShell.AppActivate 4484
             WScript.Sleep 100
             WshShell.SendKeys "/add user1~"
             WScript.Sleep 100
             …
             WshShell.SendKeys "/add usern~"
             WScript.Sleep 100
          </script>
       </job>
    </package>
    Add as many users as you need by copying the SendKeys and Sleep lines. Then, save the script to a .wsf file. The AppActivate line needs to be changed to have the process id of Skype instead of the number there. To get that, open up Task Manager, click on Processes, then find skype.exe and find its PID. Before you double-click on the file in Windows Explorer, you’ll need to have created the groups in Skype. For each group, open the group and click in the chat window of the group. Then double-click on the WSF file. If you don’t click in the chat window, you will likely get the add-user dialog box instead of just adding the users. Technorati Tags: Skype,Script
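    If you would rather stay in .NET than WSH, the same keystroke trick can be sketched in C# (an illustrative variant, not from the post) using Microsoft.VisualBasic's Interaction.AppActivate and System.Windows.Forms.SendKeys; the process id and Skype names below are placeholders.

    ```csharp
    // Hypothetical C# equivalent of the WSH script above: activate the Skype
    // window by process id, then type /add commands into the open group chat.
    // The PID and user names are placeholders -- adjust before running.
    using System.Threading;
    using System.Windows.Forms;       // SendKeys (reference System.Windows.Forms)
    using Microsoft.VisualBasic;      // Interaction.AppActivate (reference Microsoft.VisualBasic)

    static class SkypeGroupAdder
    {
        [System.STAThread]
        static void Main()
        {
            Interaction.AppActivate(4484);   // replace 4484 with skype.exe's PID from Task Manager
            Thread.Sleep(100);

            foreach (var name in new[] { "user1", "user2", "usern" })
            {
                SendKeys.SendWait("/add " + name + "{ENTER}");
                Thread.Sleep(100);
            }
        }
    }
    ```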

    Read the article
