Search Results

Search found 25974 results on 1039 pages for 'source routing'.

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now. We use a single TFS Team Project. $/MAIN/src/ $/MAIN/src/ApplicationA/VSSOlution.sln $/MAIN/src/ApplicationA/ApplicationAProject1.csproj $/MAIN/src/ApplicationA/ApplicationAProject2.csproj $/MAIN/src/ApplicationB/... $/MAIN/src/ApplicationC $/MAIN/src/SharedInfrastructureA $/MAIN/src/SharedInfrastructureB My Goal (a pretty typical promotion pipeline): When a code change is made to a given application I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure Components, and I often have some database scripts or changes as well. If developer testing passes I want to have a manually triggered but automated deploy of that build on a STAGING server where end-users will review new functionality. Once it's approved by end users I want a manually triggered auto-deploy to production. Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?
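    To make the promotion idea concrete, here is a minimal sketch of a per-application promotion step; it is not TFS-specific, and the application names, drop folders and script paths are hypothetical. Each application would get its own Team Build and its own copy of a script like this, triggered automatically for DEV and manually for STAGING and production.

        import shutil, subprocess
        from pathlib import Path

        # Hypothetical layout: one drop folder per application, one target share per environment.
        DROPS = Path(r"\\buildserver\drops")
        TARGETS = {"DEV": Path(r"\\devserver\apps"), "STAGING": Path(r"\\stageserver\apps")}

        def promote(app: str, build_number: str, environment: str) -> None:
            """Copy a finished build drop for one application to the chosen environment."""
            source = DROPS / app / build_number
            destination = TARGETS[environment] / app
            if destination.exists():
                shutil.rmtree(destination)               # replace the previous deployment
            shutil.copytree(source, destination)
            for script in sorted((source / "sql").glob("*.sql")):   # apply DB changes in order
                subprocess.run(["sqlcmd", "-S", f"{environment}-sql", "-i", str(script)], check=True)

        # Example: promote("ApplicationA", "ApplicationA_20120615.3", "DEV")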

    Read the article

  • Hot fix published for TFS2010 upgrade issues

    - by jehan
    Microsoft has released a hotfix for issues identified after migrating TFS2005/TFS2008 servers to TFS2010. The issues are related to merging and labels: labels that were created before the upgrade are entirely empty, or can have incorrect contents; the merge wizard in Visual Studio does not display all valid merge targets for a given source path/branch; and during merging, merge candidates are shown for changes that were already merged prior to the upgrade. If you have not yet upgraded to TFS 2010, the hotfix is now available and it is highly recommended that you apply it before configuring your team project collections. Because this hotfix applies to the upgrade of version control content, it must be applied after TFS 2010 setup is complete, but before configuration is started. At the end of the setup experience, the Success screen is shown indicating the completion of the installation. Normally, users will continue on to the configuration part, but in this case the user needs to cancel it by un-checking the "Launch Team Foundation Server Configuration Tool" box, which will enable the Cancel button. After exiting setup, the hotfix executable can be run to update the upgrade steps. Once the hotfix is installed, the TFS Configuration Wizard will need to be re-launched from the Start Menu to complete the upgrade process. The hotfix has been published on MSDN Code Gallery – you can find it here: http://code.msdn.microsoft.com/KB2135068 If you have upgraded to TFS2010 and are facing any of the above issues, then check out this KB for the resolution: http://support.microsoft.com/kb/2193796/en-us

    Read the article

  • How to distribute python GTK applications?

    - by Nik
    This follows on from the previous question I asked here. My aim is to create and package an application for easy installation on Ubuntu and other Debian-based distributions. I understand that the best way to do this is by creating a .deb file with which users can easily install my application on their system. However, I would also like to make sure my application is available in multiple languages. This is why I raised the earlier question, which you can read here. In the answers that were provided, I was asked to use distutils for my packaging. I am however missing the bigger picture here. Why is there a need to include a setup.py file when I distribute my application in .deb format? My purpose is to ensure that users do not need to run python setup.py to install my application but rather just click on the .deb file. I already know how to create a deb file from the excellent tutorial available here. It clearly shows how to edit rules, the changelog and everything else required to create a clean deb file. You can look at my application source code and folder structure on GitHub if it helps you better understand my situation. Please note I have glanced through the official Python documentation found here, but I am hoping for an answer that would help even a layman understand, since my knowledge is pretty poor in this regard.
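    In case it helps to see the two pieces side by side: the .deb does not replace setup.py, it calls it. Below is a minimal sketch of a distutils setup.py for a PyGTK application; the package name, file paths and translation entries are hypothetical placeholders, and projects with many .mo files often use python-distutils-extra instead. The Debian packaging then runs it at build time, typically something like python setup.py install --root=debian/<package> from debian/rules, so end users only ever click the .deb.

        # setup.py - minimal distutils script that the Debian packaging calls at build time
        from distutils.core import setup

        setup(
            name="myapp",                        # hypothetical project name
            version="0.1",
            description="A small PyGTK application",
            packages=["myapp"],                  # Python package under myapp/
            scripts=["bin/myapp"],               # launcher that ends up in /usr/bin
            data_files=[
                ("share/applications", ["data/myapp.desktop"]),
                ("share/myapp/ui", ["data/main_window.ui"]),
                # compiled translations, one entry per language
                ("share/locale/fr/LC_MESSAGES", ["po/fr/myapp.mo"]),
            ],
        )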

    Read the article

  • How to learn the math behind the code?

    - by Solomon Wise
    I am a 12-year-old who has recently gotten into programming. (Although I know that the number of books you have read does not determine your programming competency or ability, just to paint a "map" of where I am in terms of the content I know...) I've finished the books: Python 3 For Absolute Beginners, Pro Python, Python Standard Library by Example, Beautiful Code, and Agile Web Development With Rails, and am about halfway into Programming Ruby. I have written many small programs (one that finds which files have been updated and deleted in a directory, one that compares multiple players' fantasy baseball value, some text-based games, and many more). Obviously, as I'm not some sort of child prodigy, I can't take a formal Computer Science course until high school. I really want to learn computer science to increase my knowledge about code and how it runs. I've really become interested in the math part after reading the source code for Python's random module. Is there a place where I can learn CS or programming math online for free, at a level that would be at least partially understandable to a person my age?

    Read the article

  • How to Implement Complex Form Data?

    - by SoulBeaver
    I'm supposed to implement a relatively complex form that looks like follows, but has at least four more pages requiring the user to fill in all necessary information for the tracks: This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form with potentially dozens of songs to the server. The simplest available solution I have seen is a simple multipart/form-data request with the following form schema (Source): Client <html> <body> <h1>File Upload with Jersey</h1> <form action="rest/file/upload" method="post" enctype="multipart/form-data"> <p> Select a file : <input type="file" name="file" size="45" /> </p> <input type="submit" value="Upload It" /> </form> </body> </html> Server @POST @Path("/upload") @Consumes(MediaType.MULTIPART_FORM_DATA) public Response uploadTrack(final FormDataMultiPart multiPart) { List<FormDataBodyPart> artists = multiPart.getFields("artist"); StringBuffer output = new StringBuffer(); for (FormDataBodyPart artist : artists) output.append(artist.getValueAs(String.class)); List<FormDataBodyPart> tracks = multiPart.getFields("track"); for (FormDataBodyPart track : tracks) writeToFile(track.getValueAs(InputStream.class), "Foo"); return Response.status(200).entity(output.toString()).build(); } Then I have also read about file uploads via Ajax or Formdata (Mozilla HttpRequest) which allows for Posts in the formats application/x-www-form-urlencoded, multipart/form-data, or text/plain. I don't know which approach, if any, is best. An ideal solution would be to utilize Jackson to convert a json string into my data objects, but I don't get the impression that this is possible with binary data.
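    For anyone weighing the options above, the multipart/form-data route can also be exercised without the browser form, which makes it easier to test the Dropwizard/Jersey resource with many repeated fields. A small sketch using Python's requests library follows; the host and port are assumptions, and the path (rest/file/upload) and field names ("artist", "track") are simply taken from the example above.

        import requests

        # Repeated field names become separate parts, which is what
        # multiPart.getFields("artist") / getFields("track") iterates over on the server.
        artist_parts = [("artist", (None, "First Artist")),
                        ("artist", (None, "Second Artist"))]
        track_parts = [("track", ("intro.mp3", open("intro.mp3", "rb"), "audio/mpeg")),
                       ("track", ("outro.mp3", open("outro.mp3", "rb"), "audio/mpeg"))]

        response = requests.post("http://localhost:8080/rest/file/upload",
                                 files=artist_parts + track_parts)
        print(response.status_code, response.text)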

    Read the article

  • How do I do a cross-platform backup/restore of a DB2 database?

    - by Pridkett
    I need to dump a couple of databases from DB2 for Mac and DB2 for Linux and then import the databases to DB2 for Windows. Unfortunately, when I try the standard backup and restore I get the following error: SQL2570N An attempt to restore on target OS "NT-32" from a backup created on source OS "?" failed due to the incompatability of operating systems or an incorrect specification of the restore command. Reason-code: "1". I've seen references to DB2 needing an IXF dump and import, but I can't find any solid information about how to do this without dozens of other steps. Any hints on how to do this in the least painful manner?
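    One route that tends to come up for this is to avoid the platform-specific backup format entirely: pull the DDL with db2look, export the table data with db2move (which writes PC/IXF files), and replay both on the Windows instance. The sketch below just drives those command-line tools from Python; the database name and paths are placeholders, and the exact options should be double-checked against your DB2 release.

        import subprocess
        from pathlib import Path

        SOURCE_DB = "MYDB"                        # placeholder database name
        EXPORT_DIR = Path("/tmp/mydb_export")     # where the portable dump will live

        def run(cmd, cwd=None):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True, cwd=cwd)

        EXPORT_DIR.mkdir(parents=True, exist_ok=True)

        # On the source (Mac/Linux) instance: extract the DDL, then dump every table as IXF.
        run(["db2look", "-d", SOURCE_DB, "-e", "-o", str(EXPORT_DIR / "schema.ddl")])
        run(["db2move", SOURCE_DB, "export"], cwd=str(EXPORT_DIR))

        # On the Windows target, after copying EXPORT_DIR over and creating an empty database:
        #   db2 -tvf schema.ddl
        #   db2move TARGETDB import      (or "load" for very large tables)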

    Read the article

  • XNA Quadtree with LOD

    - by Byron Cobb
    I'm looking to create a fairly large environment, and as such would like to implement a quadtree and use LOD on it. I've looked through numerous examples and I get the basic idea of a quadtree: start with a root node with 4 vertices covering the whole map and divide into 4 child nodes until I meet some criteria (max number of triangles). I'm looking for some very basic algorithm or explanation with respect to drawing the quadtree. What vertices need to be stored per iteration? When do I determine what vertices to draw? When do I update indices and vertices? How do I integrate the bounding frustum? Do I include parent and child vertices? I'm looking for very simple instruction on what to do. I've scoured the internet for days now looking, but everyone adds extra code and a different spin without explanation. I understand quadtrees, but not with respect to 3D rendering and LOD. I have probably already read any link to an outside source, so that won't help. Regards, Byron.
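    As a starting point, it can help to separate the tree itself from the rendering: each node only needs its bounds, and only leaf nodes keep geometry (in XNA terms, one index buffer per leaf over a shared terrain vertex buffer). The sketch below is deliberately language-neutral Python showing just that subdivision step plus the per-frame frustum walk; the triangle representation and the 512-triangle threshold are arbitrary assumptions.

        MAX_TRIANGLES = 512      # split criterion - tune to taste
        MAX_DEPTH = 8            # safety stop for very dense meshes

        class QuadNode:
            """A triangle here is (triangle_id, centre_x, centre_z); only leaves keep them."""
            def __init__(self, min_x, min_z, max_x, max_z, triangles, depth=0):
                self.bounds = (min_x, min_z, max_x, max_z)
                self.children = []
                self.triangles = []
                if len(triangles) <= MAX_TRIANGLES or depth >= MAX_DEPTH:
                    self.triangles = triangles            # leaf: one index buffer per leaf
                    return
                mid_x, mid_z = (min_x + max_x) / 2.0, (min_z + max_z) / 2.0
                quadrants = [(min_x, min_z, mid_x, mid_z), (mid_x, min_z, max_x, mid_z),
                             (min_x, mid_z, mid_x, max_z), (mid_x, mid_z, max_x, max_z)]
                for x0, z0, x1, z1 in quadrants:
                    # triangles exactly on the far edge need an inclusive test in practice
                    inside = [t for t in triangles if x0 <= t[1] < x1 and z0 <= t[2] < z1]
                    self.children.append(QuadNode(x0, z0, x1, z1, inside, depth + 1))

        def visible_triangles(node, intersects_frustum):
            """Walk the tree each frame; gather leaves whose bounds pass the frustum test."""
            if not intersects_frustum(node.bounds):
                return []
            if not node.children:
                return node.triangles
            result = []
            for child in node.children:
                result.extend(visible_triangles(child, intersects_frustum))
            return result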

    Read the article

  • Logarithmic spacing of FFT bins

    - by Mykel Stone
    I'm trying to do the examples within the GameDev.net Beat Detection article ( http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html ). I have no issue with performing an FFT and getting the frequency data, or with doing most of the article. I'm running into trouble, though, in section 2.B, Enhancements and beat decision factors. In this section the author gives 3 equations, numbered R10-R12, to be used to determine how many bins go into each subband: R10 - Linear increase of the width of the subband with its index R11 - We can choose for example the width of the first subband R12 - The sum of all the widths must not exceed 1024 He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
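    For what it's worth, with the linear law wi = a*i + b (R10) the two knowns are the first width w1 (R11) and the total (R12), so summing the arithmetic series gives a*N*(N+1)/2 + b*N = 1024 together with w1 = a + b. Substituting b = w1 - a and solving once by hand gives a = 2*(1024 - N*w1) / (N*(N-1)). The tiny sketch below does that arithmetic and rounds the widths so they still total 1024; N = 32 and w1 = 2 are example values, not ones taken from the article.

        N = 32         # number of subbands (example value)
        W1 = 2         # chosen width of the first subband (R11)
        TOTAL = 1024   # the widths must sum to the number of FFT bins (R12)

        # w_i = a*i + b (R10); sum_{i=1..N} w_i = a*N*(N+1)/2 + b*N = TOTAL, and w_1 = a + b
        a = 2.0 * (TOTAL - N * W1) / (N * (N - 1))
        b = W1 - a
        print(f"a = {a:.4f}, b = {b:.4f}")

        widths = [round(a * i + b) for i in range(1, N + 1)]
        widths[-1] += TOTAL - sum(widths)   # absorb any rounding error in the last subband
        print(widths, sum(widths))          # sanity check: the widths add up to 1024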

    Read the article

  • Free workshop: Discover the solution for exploring structured and unstructured data

    - by David lefranc
    Explore and discover information… We are offering a discovery workshop that will let you explore any type of data using the Oracle Endeca Information Discovery solution. When: 7 December 2012, from 9:30 to 12:30. Where: Oracle, 15 Boulevard Charles de Gaulle, 92715 Colombes. To register: [email protected]. Designed for business users, this half-day workshop will introduce Oracle Endeca Information Discovery so that you can: understand and explore information coming from many different sources (Big Data, social networks, forums, surveys, blogs...); and see in what way, and how, OEID complements classic BI solutions. Through simple, fast navigation you will discover how easy it is to find answers to unforeseen questions using OEID without any prior training. Use search and guided navigation to see how structured and unstructured information can be brought together quickly to reveal hidden value. Explore all of your data in any format and from any source, including social media, documents and files. Discover and explore your data without a predefined repository, so that users are autonomous and can analyse their own data quickly. Build a strategy to increase the value of enterprise data while reducing the total cost of ownership. Discover the remarkable performance of Endeca on Oracle Exalytics, the in-memory machine. Agenda: after an introduction to the Oracle Endeca Information Discovery solution, followed by a hands-on workshop, you will see how easy it is to: use guided navigation and the search engine to explore structured and unstructured data; quickly integrate new data sources such as social media; build new user interfaces while discovering information; and respond quickly to changing business needs and data environments. When / Where: 7 December 2012, from 9:30 to 12:30, at Oracle, 15 Boulevard Charles de Gaulle, 92715 Colombes.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-15

    - by Bob Rhubart
    URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users All desktop administrators must IMMEDIATELY disable the Java Runtime Environment (JRE) Auto-Update option for all Windows end-user desktops connecting to Oracle E-Business Suite Release 11i, 12.0, and 12.1. WebLogic JMS / AQ bridge with JBoss AS 7 | Edwin Biemond Oracle ACE Edwin Biemond explains "how you can retrieve JMS messages from JBoss with the help of a WebLogic Foreign Server and how to push messages to JBoss AS with the help of a WebLogic JMS Bridge." The Healthy Tension That Mobility Creates | Hernan Capdevila "Mobile device management in the cloud makes good sense," says Hernan Capdevila. "I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service." OPN: Fusion Middleware Summer Camps in July in Lisbon and Munich For specialized Oracle Partners. Participation is limited to two people per company at each bootcamp. Registration is first come, first served. Take note of the skill requirements and prerequisites. Podcast: Cows in the Cloud and the importance of standards In part two of a four-part program, cloud experts Jim Baty, Mark Nelson, William Vambenepe, and Ajay Srivastava explain cows in the cloud and talk about the importance of standards. Community members talk about the challenges and opportunities mobile computing presents for IT architects. Apple has sold 55 million iPads since 2010. Gartner expects a 98% increase in tablet sales in 2012, to 118 million. Nielsen reports that smartphones now account for nearly half of all mobile phones in the U.S., a 38% increase over 2011. And the mobile juggernaut is just getting started. Thought for the Day "Why are video games so much better designed than office software? Because people who design video games love to play video games. People who design office software look forward to doing something else on the weekend." — Ted Nelson Source: SoftwareQuotes.com

    Read the article

  • Disk Response Time in Windows 7 Resource Monitor?

    - by Keith Nicholas
    In the Resource Monitor I'm looking at the disk response time. There are a lot of processes where the response time is thousands of milliseconds consistently; I'm pretty sure this is the source of my computer slowing down. I'm not sure what normal response times are, though. I'm running Win 7 64-bit Ultimate. This is running on a new computer: i5, with a terabyte drive, 4 GB of RAM, etc. The disk is still pretty much empty, so it should all be pretty snappy. And if it is going really slow, how do I track down what's causing it? I've turned off things like real-time virus protection as experiments to see if there is something weird there, but it makes no real difference (other than it doesn't contribute to the problem by accessing the disk).

    Read the article

  • Nokia Lumia Windows Phones Coming To India On Nov 14

    - by Gopinath
    Nokia released its first set of smartphones powered by Microsoft's Windows Phone OS, the Lumia 800 and Lumia 710, a few weeks ago in Europe. With India being one of its favourite markets, Nokia is all set to launch the Lumia on November 14 in New Delhi. Unlike Apple, who releases iPhones in India very late, Nokia is planning to bring its flagship smartphones to the Indian market very early. This is a good move by Nokia to keep its existing market share, which is continuously challenged by Android smartphones from various manufacturers. The Nokia Lumia 710 runs on version 7.5 of Windows Phone OS (nicknamed Mango) with 512 MB RAM, 8 GB internal storage capacity, a 3.7″ WVGA TFT display, WiFi, GPS, and a 5 MP camera. Pricing details are not yet available, but to be competitive in the market it should be priced somewhere between 25,000 and 30,000. Let's wait two more days and we will get the full details after the press release on Nov 14th. Source: nirmaltv. This article, titled Nokia Lumia Windows Phones Coming To India On Nov 14, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications, however the use cases above are the reverse i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security i.e. shielding enterprise resources; and scalability i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls from the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in, can be centralized and batched e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/Fedex store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports, routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops it inside your home.
Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom built implementations are also possible if specific functionality (such as message store-n-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox in your home premises outside your door. The post office delivers the parcels in your mailbox, from where you can securely retrieve it. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A light weight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection either with pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, however you still take precautions with chain locks and package inspections. Pros: Light weight software, IT doesn't need to setup anything Cons: May bypass critical firewall checks e.g. virus scans, separate software download, proliferation of non-IT managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
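    As a concrete illustration of the first pattern above, "Pull from Cloud" boils down to a scheduled poll that only ever opens outbound connections. The sketch below is generic Python rather than any Oracle product API: the endpoint URL, token and downstream hand-off are placeholders.

        import time
        import requests

        FEED_URL = "https://cloud.example.com/api/events"   # placeholder SaaS endpoint
        TOKEN = "secret-kept-on-premise"                     # credential never leaves the firewall

        def deliver_to_on_premise(event):
            print("processing", event)      # stand-in for handing off to the service bus / EBS

        def poll_once(since):
            """Outbound-only call: fetch events newer than `since`, hand them to the local system."""
            resp = requests.get(FEED_URL, params={"since": since},
                                headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
            resp.raise_for_status()
            events = resp.json()
            for event in events:
                deliver_to_on_premise(event)
            return events[-1]["id"] if events else since

        if __name__ == "__main__":
            cursor = 0
            while True:
                cursor = poll_once(cursor)
                time.sleep(3600)            # hourly schedule, as in the example in the post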

    Read the article

  • Read data from separate CSV file in OpenOffice Calc

    - by Thomi
    I have a very large CSV file (many thousands of rows) that I want to work with. Due to the size of the file, I don't want to import it into OpenOffice. Rather, I want to create a spreadsheet that contains formulas and graphs that read from this (or any other) CSV file I point it at. Ideally the spreadsheet will ask me what CSV file I want to use, allowing me to change the data source dynamically. Any ideas?

    Read the article

  • how to troubleshoot sql server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert / update / delete from any table, but I have one page that uses transactions to insert into different tables. The C# code is correct and very simple, but it fails. I used the SQL Profiler to see how my app interacts with the DB; since the app is using stored procedures, I can catch the exec procedure statement and run it manually from SSMS and it works fine, but the same stored procedure fails from the application! This led me to think the issue is coming from the user account and settings. I am no expert in SQL Server and am wondering if anyone can explain how to verify the required settings for the user account. Thanks. EDIT: in web.config here is how I reference my connection: <connectionStrings> <add name="Conn" connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3" providerName="System.Data.SqlClient" /> </connectionStrings> EDIT: I will try to describe the process here: 1- I begin a transaction 2- I call a stored proc to insert (which succeeds) and return the scope identity (that will be used in the next step) 3- I call another stored procedure to insert some more info + the scope identity from step 2, which is a foreign key here 4- I get an error, foreign key violation 5- the transaction is rolled back, now the tables are empty again... thanks
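    To rule the permissions theory in or out, it can help to reproduce the exact sequence outside the app with the same connection-string user. Here is a hedged sketch with pyodbc that mirrors steps 1-5: begin a transaction, insert the parent row, capture SCOPE_IDENTITY(), then insert the child row with it. The table and column names are made up; adapt them to your schema.

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;DATABASE=myDB;"
                              "UID=DbUser;PWD=...")           # same user as the web.config entry
        conn.autocommit = False                                # step 1: one transaction for everything
        cur = conn.cursor()
        try:
            # step 2: insert the parent row and capture its identity in the same batch
            cur.execute("SET NOCOUNT ON;"
                        "INSERT INTO Orders (CustomerName) VALUES (?);"
                        "SELECT CAST(SCOPE_IDENTITY() AS int);", "test customer")
            order_id = cur.fetchone()[0]

            # step 3: insert the child row using that identity as the foreign key
            cur.execute("INSERT INTO OrderLines (OrderId, Item) VALUES (?, ?);", order_id, "test item")

            conn.commit()                                      # steps 4/5: commit, or roll back on error
            print("ok, new order id:", order_id)
        except pyodbc.Error as exc:
            conn.rollback()
            print("failed, rolled back:", exc)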

    Read the article

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The Peoplesoft Update Manager

    - by John Webb
    With the new PeopleSoft Update Manager in PeopleSoft 9.2 the way you manage updates to your PeopleSoft systems puts you in control of all changes on your schedule.   You can selectively apply patches with reduced time, effort, and cost.    Bundles and Maintenance Packs are no longer used.      Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image.   You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates such as new features, legal and regulatory changes, or a patch related to a specific issue, process or object.    Any prerequisites are automatically identified.  The  process of generating a change package is enabled through a new wizard with easy to follow steps and options.     As changes are introduced to your test environment the PeopleSoft Test Framework provides a closed loop process to run regression tests scripts against your changes.  For a quick overview of the PeopleSoft Update Manager check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • Rugged Netbook and software options for Running R statistical software?

    - by Thomas
    I am interested in purchasing a netbook to do field research in another country. My hardware specifications for the netbook: -Be rugged enough to survive a bit of wear and tear -Fairly fast processing (the ability to upgrade from 1GB of RAM to 2GB) -A battery life of longer than 6 hours -At least a 10 inch screen -A decent camera for Skyping My software needs: -Be able to run a spreadsheet program to do basic data input (like Excel or Open Office) -Use the open source statistical program R to do basic data analysis (regression, data analysis etc.) -Word processing (Word or Open Office) Do you have any suggestions on which models or brands may fit my needs? Some of the models I was considering: Samsung NB-30, Toshiba NB 305, Asus Eee PC 1005HA, Lenovo S10-2. Also, any recommendations on the best software to load on the aforementioned netbooks would be welcome; I saw this resource from Lifehacker. Any comments? Thanks for your help! -Thomas

    Read the article

  • .NET vs Windows 8: Rematch!

    - by Simon Cooper
    So, although you will be able to use your existing .NET skills to develop Metro apps, it turns out Microsoft are limiting Visual Studio 2011 Express to Metro-only. From the Express website: Visual Studio 11 Express for Windows 8 provides tools for Metro style app development. To create desktop apps, you need to use Visual Studio 11 Professional, or higher. Oh dear. To develop any sort of non-Metro application, you will need to pay for at least VS Professional. I suspect Microsoft (or at least, certain groups within Microsoft) have a very explicit strategy in mind. By making VS Express Metro-only, developers who don't want to pay for Professional will be forced to make their simple one-shot or open-source application in Metro. This increases the number of applications available for Windows 8 and Windows mobile devices, which in turn make those platforms more attractive for consumers. When you use the free VS 11 Express, instead of paying Microsoft, you provide them a service by making applications for Metro, which in turn makes Microsoft's mobile offering more attractive to consumers, increasing their market share. Of course, it remains to be seen if developers forced to jump onto the Metro bandwagon will simply jump ship to Android or iOS instead. At least, that's what I think is going on. With Microsoft, who really knows?

    Read the article

  • Load login shell inside user cronjob

    - by sa125
    I'm trying to run a rake task using a scheduled cronjob. My crontab looks something like this: 0 1 * * 1-7 /bin/bash -l -c "cd ~/jobs/rake && rake reports:create >> ~/jobs/logs/cron.log" Ruby on my account is provided by RVM, which is loaded via ~/.bashrc (before the no-interaction check): # load RVM env [[ -s $HOME/.rvm/scripts/rvm ]] && source $HOME/.rvm/scripts/rvm # If not running interactively, don't do anything [ -z "$PS1" ] && return # ... rest of logic Time and again, this task fails to run since RVM isn't loaded when the task is called (uses system's /usr/bin/ruby instead), and gem dependencies are missing. How can I make crontab load my shell environment before executing my scheduled jobs? thanks.

    Read the article

  • Mapping a Piped Shell Command in Vim

    - by michaelmichael
    In a previous question I asked about mapping evaluated code to a new window in MacVim. I got a great solution, but it presented another question: How can I map a key command in my .vimrc that involves piping output in the shell? As a simple example, let's say I wanted to pipe the results of ls -a to a new MacVim window. From the Vim command line I can enter !ls -a | mvim -, and the results will appear in a new window. Great! Now, I add that to my .vimrc: nmap <Leader>r :w !ls | mvim<CR> Vim now throws an error every time I try to source my .vimrc, which reads as follows: E492: Not an editor command: mvim<CR> Any ideas on how to overcome this?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information, and more, is available in TFS. Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRD's) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, which updates the "Integrated In" field of those Work Items. Figure: Find all of the Work Items to associate with The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build We can now link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to test the existence, and then re-existence, of a Bug. Figure: Associating a Test Case with an automated Test Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Let's not have regressions In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check-in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Windows Media Player crashes on startup after CPU upgrade

    - by NathanE
    I upgraded my CPU, reactivated Windows Vista etc. But WMP has stopped working. It just crashes almost immediately after starting up. If I look in Event Viewer it mentions the source as being "Indiv01.key". I have searched my OS drive for this file but it does not exist anywhere. Some Googling has revealed it seems to be a common problem and that it is DRM related. Though there doesn't seem to be any concrete solution. Any ideas? Thanks!

    Read the article

  • How to get rid of grub menu after boot?

    - by umpirsky
    Here is my /etc/default/grub: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_LINUX_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" I tried various things including: How do I hide the GRUB menu showing up in the beginning of boot? How to disable Grub's menu from showing up after failed boot http://www.itworld.com/software/306238/disable-grub-boot-menu-ubuntu-1210 But I still get grub menu each time I boot. My generated /boot/grub/grub.cfg: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ] ; then set timeout=-1 else if [ x$feature_timeout_style = xy ] ; then set timeout_style=hidden set timeout=0 # Fallback hidden-timeout code in case the timeout_style feature is # unavailable. 
elif sleep --interruptible 0 ; then set timeout=0 fi fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 45,51,53; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { menuentry 'Ubuntu, with Linux 3.13.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-24-generic ...' 
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-24-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            recordfail
            load_video
            insmod gzio
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            echo 'Loading Linux 3.13.0-24-generic ...'
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-24-generic
          }
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry 'Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          insmod part_gpt
          insmod ext2
          if [ x$feature_platform_search_hint = xy ]; then
            search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
          else
            search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
          fi
          linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
          initrd /boot/initrd.img-3.13.0-29-generic
        }
        submenu 'Advanced options for Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' $menuentry_id_option 'osprober-gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          menuentry 'Ubuntu (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-24-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt
            insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            else
              search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460
            fi
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet
            initrd /boot/initrd.img-3.13.0-24-generic
          }
        }
        set timeout_style=menu
        if [ "${timeout}" = 0 ]; then
          set timeout=10
        fi
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/30_uefi-firmware ###
        menuentry 'System setup' $menuentry_id_option 'uefi-firmware' {
          fwsetup
        }
        ### END /etc/grub.d/30_uefi-firmware ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f ${config_directory}/custom.cfg ]; then
          source ${config_directory}/custom.cfg
        elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
          source $prefix/custom.cfg;
        fi
        ### END /etc/grub.d/41_custom ###
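    Reading the generated file, two places appear to override what I put in /etc/default/grub: the 00_header section forces "set timeout=-1" (wait on the menu indefinitely) whenever recordfail is 1, and the 30_os-prober section ends with "set timeout_style=menu" and "set timeout=10" because it detected the second install on /dev/mapper/isw_beaaegcdjh_ASUS_OS2. Below is a sketch of the /etc/default/grub settings that, as far as I understand, are usually suggested for hiding the menu on 14.04; GRUB_TIMEOUT_STYLE needs GRUB 2.02 or newer, and GRUB_RECORDFAIL_TIMEOUT / GRUB_DISABLE_OS_PROBER are options I have not verified on this machine yet:

        # Sketch only -- not yet applied
        GRUB_DEFAULT=0
        # Hide the menu and boot immediately (replaces GRUB_HIDDEN_TIMEOUT on newer GRUB).
        GRUB_TIMEOUT_STYLE=hidden
        GRUB_TIMEOUT=0
        # Ubuntu-specific: limit the "wait forever after an unclean boot" behaviour
        # (0 = never wait; a small value such as 2 is safer).
        GRUB_RECORDFAIL_TIMEOUT=0
        # Optional: stop os-prober from adding the second install, since that is what
        # re-enables the menu at the end of 30_os-prober (the other system would then
        # no longer appear in the menu at all).
        #GRUB_DISABLE_OS_PROBER=true
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""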
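    Whatever ends up in /etc/default/grub, the configuration has to be regenerated afterwards. These are the commands I believe apply here (the grep only shows which generated sections still touch the timeout, and the last command clears a leftover recordfail flag if one is set):

        sudo update-grub                                        # regenerate /boot/grub/grub.cfg
        grep -n 'timeout' /boot/grub/grub.cfg                   # see which sections still override the timeout
        sudo grub-editenv /boot/grub/grubenv unset recordfail   # drop a stale recordfail flag, if any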

    Read the article
