Search Results

Search found 72394 results on 2896 pages for 'install name tool'.


  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    Installation date: 6/29/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer. More information: http://go.microsoft.com/fwlink/?LinkId=216803

    and:

    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
    Installation date: 6/28/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found. More information: http://support.microsoft.com/kb/947821

    About My System
    I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.

    Read the article

  • Writing xml with powershell

    - by alex
    I have a script that gets all the info I need about my SharePoint farm: [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local $websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]} $webapps = @() foreach ($websvc in $websvcs) { write-output "Web Applications" write-output "" foreach ($webapp in $websvc.WebApplications) { write-output "Webapp Name -->"$webapp.Name write-output "" write-output "Site Collections" write-output "" foreach ($site in $webapp.Sites) { write-output "Site URL --> -->" $site.URL write-output "" write-output "Websites" write-output "" foreach ($web in $site.AllWebs) { write-output "Web URL --> --> -->" $web.URL write-output "" write-output "Lists" write-output "" foreach ($list in $web.Lists) { write-output "List Title --> --> --> -->" $list.Title write-output "" } foreach ($group in $web.Groups) { write-output "Group Name --> --> --> -->" $group.Name write-output "" foreach ($user in $group.Users) { write-output "User Name --> --> --> -->" $user.Name write-output "" } } } } } } I want to write the output to an XML file, then connect the XML file to HTML and make a site of it for managers to use. How can I do it? Thanks for the help!
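    One possible approach (a minimal sketch, not the poster's script; it assumes the same $websvcs collection built above, uses a hypothetical output path, and only emits web applications and site collections) is to build the document with .NET's XmlTextWriter instead of write-output, save it, and then point an HTML page or XSL stylesheet at the resulting file:

      # Sketch only: write farm data to an XML file (path is hypothetical)
      $writer = New-Object System.Xml.XmlTextWriter("C:\reports\farm.xml", [System.Text.Encoding]::UTF8)
      $writer.Formatting = "Indented"
      $writer.WriteStartDocument()
      $writer.WriteStartElement("Farm")
      foreach ($websvc in $websvcs) {
          foreach ($webapp in $websvc.WebApplications) {
              $writer.WriteStartElement("WebApplication")
              $writer.WriteAttributeString("Name", $webapp.Name)
              foreach ($site in $webapp.Sites) {
                  $writer.WriteStartElement("SiteCollection")
                  $writer.WriteAttributeString("Url", $site.URL)
                  $writer.WriteEndElement()   # SiteCollection
              }
              $writer.WriteEndElement()       # WebApplication
          }
      }
      $writer.WriteEndElement()               # Farm
      $writer.WriteEndDocument()
      $writer.Close()

    The remaining loops (webs, lists, groups, users) would follow the same nested WriteStartElement/WriteEndElement pattern, and a simple XSL stylesheet can then render farm.xml as the HTML page for managers.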

    Read the article

  • Extending ext4 partition on debian7.0 on vsphere

    - by VoidPointer
    I allocated thin-provisioned storage of 15GB when I found 8GB to be insufficient. Now the Debian guest does not recognize the change in size.

    root@debian7-x64:~# lvdisplay
    --- Logical volume ---
    LV Path                /dev/debian7-x64/root
    LV Name                root
    VG Name                debian7-x64
    LV UUID                EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0
    LV Write Access        read/write
    LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
    LV Status              available
    # open                 1
    LV Size                7.39 GiB
    Current LE             1892
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           254:0

    --- Logical volume ---
    LV Path                /dev/debian7-x64/swap_1
    LV Name                swap_1
    VG Name                debian7-x64
    LV UUID                xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1
    LV Write Access        read/write
    LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
    LV Status              available
    # open                 2
    LV Size                376.00 MiB
    Current LE             94
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           254:1

    root@debian7-x64:~# pvdisplay
    --- Physical volume ---
    PV Name               /dev/sda5
    VG Name               debian7-x64
    PV Size               7.76 GiB / not usable 2.00 MiB
    Allocatable           yes (but full)
    PE Size               4.00 MiB
    Total PE              1986
    Free PE               0
    Allocated PE          1986
    PV UUID               SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR

    root@debian7-x64:~# sfdisk -s
    /dev/sda: 15728640
    /dev/mapper/debian7--x64-root: 7749632
    /dev/mapper/debian7--x64-swap_1: 385024
    total: 23863296 blocks

    Help me to extend this partition. Rebooting is not a problem, but I don't have a live CD. Environment: Debian 7 with LVM, on vSphere, ext4 partition. I can provide more details when needed.
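    A rough outline of the usual fix (a hedged sketch only: device and volume names are taken from the output above and must be double-checked, and the partition step assumes /dev/sda5 is the last partition on the disk):

      # 1. Make the kernel see the grown virtual disk (or simply reboot)
      echo 1 > /sys/block/sda/device/rescan
      # 2. Grow the partition that backs the PV (with fdisk/parted), then tell LVM about it
      pvresize /dev/sda5
      # 3. Give the new extents to the root LV and grow the ext4 filesystem online
      lvextend -l +100%FREE /dev/debian7-x64/root
      resize2fs /dev/debian7-x64/root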

    Read the article

  • Scanning to network share

    - by tking
    In our environment we have several printers with the scan-to-folder functionality set up, and it worked pretty well. Long story short: I needed to reboot our file server one evening (the server hosting the shares where all the scans go). Upon boot up I received a message stating the name of the file server (i.e. AcmeFS1; Windows Server 2003) could not be used because there was a duplicate name on the network. I found that somehow another printer on the network was using the name of the file server. Weird - I know. People could no longer scan to their folders. I ran an nbtstat on the file server and found that the local NetBIOS name table was empty. I renamed the offending printer and rebooted the file server once more. This time there was no error, and once I ran nbtstat again, I found the correct name and domain in the local NetBIOS name table. Problem is, scan to folder is still not working. I know this was working before I rebooted the file server the first time. Anyone have any idea what is going on and how to fix it? Thanks. FIXED: Not sure why the reboot didn't fix it, but I restarted the "Server" service on the file server and the problem went away.
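    For reference, both the check and the eventual fix can be run from a Command Prompt on the file server (a sketch only; the service involved is the one named in this post):

      rem Show the local NetBIOS name table that should list the server name
      nbtstat -n
      rem Restart the Server (LanmanServer) service that publishes the file shares
      net stop lanmanserver /y
      net start lanmanserver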

    Read the article

  • Mobile app for sysadmins with monitoring and fixing tools(SSH, ping, traceroute) [closed]

    - by Roman
    I represent a start-up company which is working on a new mobile tool for system administrators. Our team has released the first several versions of Server Auditor, which is currently just an SSH terminal with a special UI approach for touch devices, and it has received quite good feedback, e.g. on iOS and Android. Now we are thinking about adding extra features to make Server Auditor the number one tool for all system administrators, and we would like to know your opinion. The main question: would you use a tool like Server Auditor with the extra features described below?
    1. Fast problem fixing - preloaded recipes/snippets, e.g. clean logs, restart a process, reboot, etc.
    2. Secure user data synchronisation (IP/DNS name, connection options, keys, snippets) across all your devices, iPhone and Android.
    3. Built-in tools like ping, traceroute, whois.
    4. System status integration - you can observe information about the system in a friendly way, e.g. CPU load, hard drive and RAM usage, etc.
    5. Monitoring tool integration - your servers are watched by our Nagios-like system in the cloud and you get notified by push notifications/SMS. Similar products are Server Density and CopperEgg.
    If we start to implement features 1 to 5, when would you be ready to start using it, or even potentially pay for it? Can you see any issues that would prevent you from using this kind of system? Thank you a lot for your time, we kindly appreciate it. Looking forward to hearing your opinion.

    Read the article

  • /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory

    - by G_T
    I'm trying to locally install a program which is written in C++. I have downloaded the program and am attempting to use the "make" command to compile it, as the program's instructions dictate. However, when I do, I get this error: /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory compilation terminated. Looking around on the internet, some people seem to address this problem with sudo apt-get install libc6-dev-i386 I checked to see if this package was installed, and it was not. When I try to install it I get E: Unable to locate package libc6-dev-i386 I have already run sudo apt-get update. I'm sure this is a rookie question, but any help is appreciated. I'm running 13.10 32-bit.
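    One way to narrow this down (a sketch, not a guaranteed fix: on a 32-bit install the i386 cross package generally is not available, and the equivalent headers usually ship in libc6-dev):

      # See which package should own the missing header
      sudo apt-get install apt-file && sudo apt-file update
      apt-file search bits/predefs.h
      # Make sure the basic C development headers are actually present
      sudo apt-get install build-essential libc6-dev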

    Read the article

  • How to fix "BASIC runtime error 1; Type: com.sun.star.uno.Runtime Exception, Message: Toolbar do not exist " error in Libreoffice calc, Ubuntu 12.04

    - by PDeb
    I get the following error while opening a .xls file in LibreOffice Calc on Ubuntu 12.04 [LibreOffice 3.5.5.2; Build ID: 350m 1 (Build:2)]. To overcome this I set the LibreOffice security level to Low under Macro Security (from the Tools -> Options -> Security tab). Then I went ahead and installed Java by running the following commands from the terminal window (with some tips from various forums): sudo add-apt-repository ppa:libreoffice/ppa sudo apt-get update sudo apt-get install libreoffice libreoffice-java-common libreoffice-math libreoffice-gnome libreoffice-java-common Still I got the BASIC runtime error (as in the title), even after clicking Tools -> Options -> Java, checking the 'Use a Java runtime environment' option, and then clicking on 'Sun Microsystems Inc.' under the listed JRE environments installed. I even ran the following commands to install the latest Java runtime environment from the terminal window: sudo apt-get install openjdk-7-jre icedtea-7-plugin But still I get the same BASIC runtime error (details as in the title). This particular file opens perfectly in Microsoft Excel 2007 on Windows XP Professional.

    Read the article

  • Error: cluster_port_ready: could not find psql binary

    - by Christoffer D. Brammer
    When I use apt-get install I get the error: Error: cluster_port_ready: could not find psql binary What do I do here? I have tried to remove PostgreSQL 8.2, but it gives this error: The following packages have unmet dependencies: postgresql-8.2: Depends: libkrb53 (>= 1.6.dfsg.1) but it is not installable Depends: postgresql-client-8.2 but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And then I end up back at the start, with the same error when trying apt-get -f install. /Christoffer
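    A typical sequence for untangling a half-removed package like this (a sketch only; the package names are the ones from the error above, and force-removing should be done with care):

      # Let dpkg/apt try to finish whatever was interrupted
      sudo dpkg --configure -a
      sudo apt-get -f install
      # If postgresql-8.2 itself is what is stuck, force-remove just that package record
      sudo dpkg --remove --force-remove-reinstreq postgresql-8.2
      # Then reinstall the client that provides the psql binary
      sudo apt-get update && sudo apt-get install postgresql-client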

    Read the article

  • Using XNA ContentPipeline to export a file in a machine without full XNA GS

    - by krolth
    My game uses the Content Pipeline to load the spriteSheet at runtime. The artist for the game sends me the modified spritesheet, and I do a build on my machine and send him an updated project. So I'm looking for a way to generate the .xnb files (the output of the Content Pipeline) on his machine without him having to install the full XNA Game Studio.
    1) I don't want my artist to install VS + XNA (I know there is a free version of VS, but this won't scale once we add more people to the team).
    2) I'm not interested in running this editor/tool on Xbox, so a Windows-only solution works.
    3) I'm aware of the MSBuild options, but they require full XNA.
    I researched Shawn's blog and found the option of using the MSBuild sample, as well as a new option in XNA 4.0 that looked promising here, but it seems to have the same restriction: you need to install full XNA GS because the Content Pipeline is not part of the XNA redist. So has anyone found a workaround for this?

    Read the article

  • Ubuntu 12.04 on MAC Mini G4

    - by Buford Speed
    I have a Mac Mini G4 and would like to install Ubuntu on it. I have looked at this Q&A: Mac G4 with no OS: can Ubuntu be used?, and I have a question. Is it worth the hassle to install Ubuntu, or should I keep Mac OS? The unit currently runs Mac OS X 10.4 (Tiger). It is used for normal day-to-day operations: word processing, spreadsheets, browsing, and keeping track of finances. I'd like to install Ubuntu if I can. So is it possible?

    Read the article

  • Problem bash completion apt-get 12.10

    - by dadexix86
    I've got an annoying problem with completion and sudo apt-get. To give an example: $ sudo apt-get in[Tab][Tab] in intel_bios_reader includeres intel_disable_clock_gating indicator-multiload intel_dpio_read info intel_dpio_write infobrowser intel_error_decode infocmp intel_forcewaked infokey intel_gpu_abrt infotocap intel_gpu_time inimf intel_gpu_top init intel_gtt init-checkconf intel_l3_parity initctl intel_reg_checker initctl2dot intel_reg_dumper initex intel_reg_read inkscape intel_reg_snapshot inkview intel_reg_write inputattach intel_sprite_on insmod intel_stepping install intel_upload_blit_large install-docs intel_upload_blit_large_gtt installfont-tl intel_upload_blit_large_map install-info intel_upload_blit_small installkernel interdiff --More-- It works correctly both with plain apt-get and when doing it as root: $ apt-get in[Tab]stall $ sudo -i [sudo] password for davide: root@brenna:~# apt-get in[Tab]stall So is the problem using autocompletion after sudo? Not really, because: $ sudo apt-[Tab][Tab] apt-add-repository apt-extracttemplates apt-key apt-cache apt-file apt-mark apt-cdrom apt-ftparchive apt-sortpkgs apt-config apt-get Summing up, the problem seems to be using sudo and auto-completion for program options together. Any good advice for that?
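    For what it's worth, when completion after sudo falls back to plain command-name completion like this, a first check is whether the bash-completion machinery is loaded and has a rule registered for sudo at all (a sketch, assuming a stock interactive shell):

      # Show which completion spec, if any, is registered for sudo
      complete -p sudo
      # If nothing is registered, load the bash-completion definitions and retry the Tab
      [ -f /etc/bash_completion ] && . /etc/bash_completion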

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.

    Read the article

  • Upgrade Kubuntu 11.10 to 12.10 I get error code that 11.10 is outdated

    - by Bobby Dedman
    I installed Kubuntu 11.10 on top of Windows 7 and everything works fine. I tried to upgrade 11.10 to 12.10, but I get an error saying that 11.10 is outdated and not supported, and the update stops. I downloaded the 12.10 ISO, but there is no upgrade option, just install. How can I remove the 11.10 installation and install 12.10 without breaking my Windows 7 install? I need the dual boot option. Thanks for the support. Bobby D.

    Read the article

  • Ubuntu 12.04 installation aborts without giving any errors on Sony Vaio

    - by Guilherme Simoni
    I'm not able to install the release ubuntu-12.04-desktop-i386 on the laptop below: Sony Vaio VGN-FE21H CPU: Intel Core Duo T2300 1.66GHz Memory: 2GB DDR2 533MHz HDD: 100GB Graphics: NVIDIA GeForce 7400 256MB I'm using the ISO "ubuntu-12.04-desktop-i386.iso" burned onto a DVD. I know the ISO is OK because I used it to successfully install on VirtualBox. The live DVD boots and runs OK, but I cannot install from it or directly from the boot menu. The installation goes through all the steps until the final part, where it asks for the name, the name of the PC, and the password. The problem is in the next step, where it should start copying files and presenting some screens and features of Ubuntu. At this point the installation just closes without any error message. If I am running the installation inside the live DVD, it closes and returns to the home screen of the live session. If I am running it straight from boot, it closes the graphical interface and restarts the PC. Does anybody know what might be wrong, or has anyone faced the same problem?

    Read the article

  • SQL SERVER – Fix: Error: File cannot be loaded because the execution of scripts is disabled on this system. Please see “get-help about_signing” for more details

    - by pinaldave
    Yesterday I formatted my computer and did a fresh install, as it was long overdue. After the fresh install, when I tried to install the Semantic Search application using PowerShell, I was stopped by the following error: File cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details Fix/Solution/Workaround: The solution is very simple. Open the PowerShell window, type the following two lines, and everything will be fine right after that. Set-ExecutionPolicy Unrestricted Set-ExecutionPolicy RemoteSigned Again, this is what I have done for my environment, where I am very careful about what I run. You can change the policy back to the original restricted policy if you want to restrict future execution of PowerShell scripts. Simple – isn't it? Well, all complex-looking problems are very simple to solve. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Powershell
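    To verify the change (a small sketch, not from the original post; it assumes an elevated PowerShell session), you can list the effective policy at every scope, and you can also limit the looser policy to the current user rather than the whole machine:

      # Show the policy at each scope (MachinePolicy, UserPolicy, Process, CurrentUser, LocalMachine)
      Get-ExecutionPolicy -List
      # Alternative that changes the policy only for the current user
      Set-ExecutionPolicy RemoteSigned -Scope CurrentUser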

    Read the article

  • Will it be possible to use a non-pae kernel in 12.10

    - by Roland Taylor
    I know that Ubuntu +1 questions are frowned upon, but I believe this is a fair exception. Currently I have 2 systems running Ubuntu 12.10, and one of them has a Pentium M that doesn't support PAE (strange, I know, but true). This has meant in the past that I had to rely on a custom ISO to install Ubuntu on a similar system, and so this time I went with Xubuntu 12.04. My question is two-fold, but really one question: Is it/will it be possible to install a non-PAE version of the 12.10 kernel from the standard repositories? If no, how can I get such a kernel? (Is there a PPA with such a kernel available?) NB: Before anyone suggests that I just install this package: http://packages.ubuntu.com/quantal/linux-image-generic, please note that it comes with PAE enabled. P.S. Yes, I have Googled. I haven't found the answer.
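    As a quick aside, you can confirm whether the CPU advertises PAE before hunting for a kernel (a sketch; some Pentium M parts actually support PAE but simply don't list the flag):

      # Count how many logical CPUs report the pae flag (0 means none advertised)
      grep -c '\bpae\b' /proc/cpuinfo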

    Read the article

  • Ubuntu 12.04 installation never finishes

    - by Eric Carlsson
    I am trying to install Ubuntu 12.04 on an old Dell Latitude C400 laptop previously running Windows XP, using an Ubuntu install CD. The OS runs fine off the CD, if a bit slow, and the installation starts off fine. However, once the progress bar is filled I get put back to the desktop and nothing happens. At first I assumed that the installation was complete, but when I restarted the computer and booted from the hard drive, nothing happened. I tried to install a few more times, but the same thing happens. Am I doing something wrong? I do not get an error message or an installation-complete notification. Is there any way I could get some kind of debug screen or log to see if something went wrong, or at least follow the installation progress in more detail? Thanks for the help.
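    The installer (ubiquity) does write logs inside the live session, so one way to watch what it is doing (a sketch; these are the usual paths for 12.04 and may differ) is to open a terminal before starting the install and follow them:

      # Overall installer and system messages
      tail -f /var/log/syslog
      # Partitioning-specific messages from the installer
      tail -f /var/log/partman
      # Or launch the installer in debug mode from the live desktop
      sudo ubiquity -d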

    Read the article

  • Add Hotmail & Live Email Accounts to Outlook 2010

    - by Matthew Guay
    Microsoft has recently been promoting upcoming updates to their Hotmail service, promising to make it an even better webmail service. But Microsoft’s revamped Outlook 2010 is already here. Here’s how to integrate Hotmail with Outlook. Outlook 2010 works with a wide variety of email accounts, including POP3, IMAP, and Exchange accounts.  The only problem with POP3 and IMAP accounts is that they only sync email, but not your calendar and contacts like Exchange does.  Hotmail, however, lets you sync your email, contacts, and calendar with Outlook with the Hotmail Connector.  This lets you keep all of your PIM data accessible from everywhere.  Let’s look at how we can set this up on our account. Getting Started The easiest way to add Hotmail to Outlook is to first install the Outlook Hotmail Connector (link below).  Make sure Outlook is closed first, and then proceed with the installation as usual. If you enter your Hotmail account into the New Account setup in Outlook before installing the Hotmail Connector, Outlook will prompt you to download the Hotmail Connector.  However, you’ll have to exit Outlook before you can install the Connector, and then will have to re-enter your information when you restart Outlook, so it’s easier to just install it first. Add Your Hotmail Account to Outlook Now you’re ready to add your Hotmail account to Outlook.  If this is the first time you’ve run Outlook 2010, you’ll be greeted with the following screen.  Click Next to proceed with setup. Then select Yes and click Next again. If you’ve already got an email account setup in Outlook, you can add a new account by clicking File and then selecting Add account. Now, enter your Hotmail account information, and click Next. Outlook will search for your account settings and automatically setup your account with the Hotmail connector we previously installed. If you entered your password incorrectly previously, you may see the following popup.  Re-enter your password and click OK, and Outlook will re-verify your settings. Once everything’s finished and setup, you’ll see the following completion screen.  Click Finish to complete the setup and check out your Hotmail in Outlook. Welcome to your Hotmail account in Outlook 2010.  You’ll notice a small notification at the bottom of the window notifying you that you’re connected to Windows Live Hotmail.  Now your email will synchronize with your Hotmail account, and your Outlook calendar and contacts will be synced with your Live calendar and contacts, respectively.  This is the closest you can get to full Exchange without an Exchange account, and in our experience it works great.  In fact, Hotmail Sync seems to work faster than IMAP sync for us. Setup Hotmail With POP3 Access If you need to access your Hotmail email account but don’t want to install the Outlook Connector, then you can add it with POP3 sync.  We recommend going with the Outlook Connector for the best experience, but if you can’t install it (eg. you’re not allowed to install applications on your work PC) then this is a good alternative. To do this, follow our tutorial on setting up a Gmail POP3 account in Outlook. Although the article concentrates on Gmail, the settings are essentially the same. The only thing you’ll want to change is the Incoming and Outgoing mail server. 
    Incoming mail server – pop3.live.com
    Outgoing mail server – smtp.live.com
    User name – your Hotmail or Live email address
    Incoming Server (POP3) – 995
    Outgoing Server (SMTP) – 587

    Also, check "This server requires an encrypted connection". Just as in the Gmail example, select TLS for the type of encrypted connection. Then, at the bottom, make sure to uncheck the box to Remove messages from the server after a number of days. This way your messages will still be accessible from your Hotmail account online.

    Conclusion

    Even though Hotmail is generally not as popular as Gmail, it works great with Outlook integration. If you're a heavy user of Windows Live services, or want to try them out, Outlook Connector is the easiest way to keep your desktop activity synced with the cloud. If you're just one of the millions of Hotmail users who want to access their old Hotmail account alongside their other accounts, this method works great for you too. If you're using Outlook 2003 or 2007, check out our article on using Hotmail from Microsoft Outlook.

    Links

    Download Outlook Hotmail Connector 32-bit
    Download Outlook Hotmail Connector 64-bit – note, only for users of Office 2010 x64

    Read the article

  • Packages having unmet dependencies: Broken Packages

    - by Akarsh
    When I try to install PostgreSQL I get the following error: sudo apt-get install postgresql-client Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libc6-dev : Breaks: gcc-4.4 (< 4.4.6-4) but 4.4.4-14 ubuntu5 is to be installed E: Broken packages How do I resolve this issue and install PostgreSQL?
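    A usual first attempt for a "Broken packages" message like this (a sketch only; the libc6-dev/gcc-4.4 conflict above suggests a partially upgraded system, so the toolchain may need to come up to consistent versions before postgresql-client will install):

      sudo apt-get update
      # Pull libc6-dev and gcc-4.4 up to mutually consistent versions
      sudo apt-get dist-upgrade
      # Then retry the original install
      sudo apt-get install postgresql-client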

    Read the article

  • The Winds of Change are a Blowin&rsquo;

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • How do you uninstall wine 1.5?

    - by jeff
    I installed Wine through the terminal using these commands: sudo add-apt-repository ppa:ubuntu-wine/ppa sudo apt-get update sudo apt-get install wine1.5 I know that I've removed at least part of wine. I removed the .wine folder in my home folder and I managed to remove the repository via this command: sudo apt-add-repository --remove ppa:ubuntu-wine/ppa Now when I try to install wine 1.4 via Ubuntu software center, it tells me I must remove these items to install wine: Microsoft windows compatibility layer (binary emulator and library) wine1.5 Microsoft windows compatibility layer (64-bit support) wine1.5-amd64 Microsoft windows compatibility layer (32-bit support) wine1.5-i386:i386 I've already tried this command: sudo apt-get --purge remove wine but it said that wine wasn't installed. Some assistance would be appreciated.
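    Since the Software Center already names the three leftover packages, one way to clear them out (a sketch; the package names are the ones listed above) is to purge them explicitly and then let apt tidy up:

      sudo apt-get purge wine1.5 wine1.5-amd64 wine1.5-i386:i386
      sudo apt-get autoremove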

    Read the article

  • Problem with ATI Radeon HD 7670M on Ubuntu 12.10

    - by Aniket
    I updated from Ubuntu 12.04 to 12.10 a couple of days back. My machine is a Dell Inspiron 15R with a Radeon HD 7670M graphics card. When I was using 12.04, I was able to install the proprietary driver using the notification you usually get. But after the upgrade, the graphics driver is not getting installed. I am not getting the notification to install the driver, and I also tried to follow the steps given in: What is the correct way to install ATI Catalyst Video Drivers (fglrx)? I tried the legacy as well as the usual installation procedure. Both ways I failed, and my graphics went to fallback mode, disabling Unity. Note: My laptop is heating up to 80 degrees and can shut down at any point. Please help.
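    For reference, a bare-bones attempt from the repositories looks roughly like this (a sketch only, not a guaranteed fix: the HD 7670M is not a legacy chip, so the standard fglrx package is the one to try, and any half-installed driver should be purged first):

      # Clean out any partially installed driver
      sudo apt-get purge 'fglrx*'
      # Install the packaged Catalyst driver and regenerate the X configuration
      sudo apt-get install fglrx
      sudo aticonfig --initial -f
      sudo reboot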

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
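    For anyone who prefers doing the same edit from a command line, the whole unzip/edit/re-zip cycle can be scripted (a rough sketch: the file name and the new maxVersion value are just the ones used in this example, and the sed pattern assumes install.rdf uses the em:maxVersion element form rather than the attribute form):

      # Unpack the extension, bump maxVersion, and repack it
      unzip gcalpopup.xpi -d gcalpopup
      sed -i 's|<em:maxVersion>.*</em:maxVersion>|<em:maxVersion>3.7.*</em:maxVersion>|' gcalpopup/install.rdf
      cd gcalpopup && zip -r ../gcalpopup-fixed.xpi . && cd ..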
    Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible this method should get those extensions working nicely for you again.

    Read the article

  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
This is a follow-on from my previous post where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 environments that want to upgrade to 11.1.1.4. In this example I'm just upgrading SOA Suite on OEL 64-bit, but the steps will be the same; some of the downloads may differ based on your environment. To upgrade to 11.1.1.4 you need access to http://support.oracle.com, as this is where the downloads reside. Oracle also provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using the OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html).

The high-level steps to upgrade are as follows:

1. Download the software
2. Shut down your SOA environment
3. Upgrade WLS to 11.1.1.4
4. Upgrade SOA Suite to 11.1.1.4
5. Upgrade OSB to 11.1.1.4
6. Upgrade the MDS and SOA schemas

Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads; if you are using 64-bit, use the generic versions. The downloads are listed at http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC. For the purpose of this post I downloaded the following patches:

11060985 – WLS Server Generic
11060960 – SOA Suite
11061005 – OSB Suite

You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have used the patch number from Oracle Support: 11060956 – RCU.

Make sure the Java executable is in your PATH, e.g. export PATH=$JAVA_HOME/bin:$PATH, and that your entire WebLogic environment has been shut down before performing the upgrade.

Upgrade WebLogic Server. Extract the WLS patch 11060985 to a temporary directory and start the installer:

java -jar wls1034_upgrade_generic.jar

(If you are not running 64-bit, the upgrade download is a .bin file that you can execute directly.) Choose the correct Oracle home for your WebLogic Server install. On the Register for Security Updates screen you can enter your details or just click Next; if you do not enter details, confirm that you don't want to receive these updates. Select the products you want to upgrade and click Next (it is recommended that you accept the defaults), then confirm the directories that will be upgraded. The upgrade of WLS is now complete.

Upgrade SOA Suite. Extract both SOA downloads to a temporary directory and run the installer found in Disk1:

./runInstaller -jreLoc /java/jdk1.6.0_20/jre

Note that the Java location and version may be different for your environment. Skip the Software Updates step, ensure your system meets the prerequisites, and set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server (since you are upgrading from 11.1.1.x you will be on WebLogic) and start the install. When the upgrade of SOA Suite completes, accept the defaults to finish.

Upgrade OSB. In my environment I have OSB installed, so I need to upgrade it next; if you don't have OSB you can go straight to the DB schema updates. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder:

./runInstaller -jreLoc /java/jdk1.6.0_20/jre

Skip the Software Updates step, select the Oracle home for your environment, accept the warning to continue the upgrade, point to the location of your WebLogic Server installation, and install the OSB upgrade. When it finishes, accept the defaults.

Upgrade the database schemas. Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed, and run the following command to update the MDS schema. Note that in my examples the schema prefix (context) is set to DEV, so all my schemas are prefixed with DEV_; yours may be different.

./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS

You will be prompted for the sys and schema passwords:

Enter the database administrator password for "sys":
Enter the schema password for schema user "DEV_MDS":

Then change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant for SOA Suite is installed, and run the following command to update the SOA and BAM schemas:

./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA

To check that the schemas were upgraded correctly, run the following SQL as sysdba:

SELECT owner, version, status FROM schema_version_registry;

OWNER                          VERSION                        STATUS
------------------------------ ------------------------------ -----------
DEV_MDS                        11.1.1.4.0                     VALID
DEV_SOAINFRA                   11.1.1.4.0                     VALID

Don't stress if the versions are not all sitting at 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.
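Taken together, the database-side part of the upgrade boils down to two Patch Set Assistant runs plus a verification query. The following is a minimal shell sketch of that sequence, assuming the same DEV_ schema prefix and XE connect string used above; the MW_HOME value is a placeholder (not from the post), and sqlplus is assumed to be on the PATH with OS authentication available for sysdba.

#!/bin/sh
# Minimal sketch of the schema-upgrade steps described above.
# Adjust MW_HOME, the connect string and the schema prefix for your environment.
MW_HOME=/u01/app/oracle/middleware      # placeholder path, not from the post
CONNECT='localhost:1521:xe'

# Update the MDS schema (psa prompts for the sys and DEV_MDS passwords)
cd "$MW_HOME/oracle_common/bin"
./psa -dbType Oracle -dbConnectString "$CONNECT" -dbaUserName sys -schemaUserName DEV_MDS

# Update the SOA and BAM schemas (prompts for the sys and DEV_SOAINFRA passwords)
cd "$MW_HOME/Oracle_SOA1/bin"
./psa -dbType Oracle -dbConnectString "$CONNECT" -dbaUserName sys -schemaUserName DEV_SOAINFRA

# Confirm the versions now registered for each schema
echo "SELECT owner, version, status FROM schema_version_registry;" | sqlplus -s / as sysdba

Note that this only covers the schema work; the interactive installer steps for WLS, SOA Suite and OSB above still have to be run first.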

    Read the article

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
Here are some errors that I am getting:

1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close.

root@p:/# firestarter
** (firestarter:5890): WARNING **: The connection is closed
(firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported.
(firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[the same GConf warning is printed six more times]
^C

2) Also I cannot apt-get install sudo:

root@p:/# apt-get install sudo
Reading package lists... Done
Building dependency tree
Reading state information... Done
sudo is already the newest version.
The following packages were automatically installed and are no longer required:
  gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
9 not fully installed or removed.
Need to get 0 B/76.3 MB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
/bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found
(Reading database ... 495741 files and directories currently installed.)
Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ...
dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory
dpkg: warning: subprocess old pre-removal script returned error exit status 2
dpkg - trying script from the new package instead ...
dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory
dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack):
 subprocess new pre-removal script returned error exit status 2
dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 2
Preparing to replace linux-image-3.2.0-25-generic 3.2.0-25.40 (using .../linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb) ...
[the same prerm and postinst errors are then printed for linux-image-3.2.0-25-generic]
Errors were encountered while processing:
 /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb
 /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
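For context, the output above points at two separate problems: /usr/sbin/dpkg-preconfigure (shipped by the debconf package) is missing, and the maintainer scripts for the two kernel image packages are gone from /var/lib/dpkg/info. The original question does not include a fix; the sketch below is one commonly suggested recovery, not the poster's solution. It assumes the cached .deb archives in /var/cache/apt/archives are intact; the package names are taken from the error messages, everything else is illustrative and should be adapted (and backed up) before use.

# Hypothetical recovery sketch, not part of the original post.
# 1) Restore dpkg-preconfigure by re-extracting the debconf package.
#    dpkg -x only unpacks files and does not touch the package database;
#    this assumes exactly one cached debconf_*.deb (otherwise fetch one
#    first with "apt-get download debconf").
cd /var/cache/apt/archives
dpkg -x debconf_*.deb /

# 2) Replace the missing kernel maintainer scripts with harmless stubs so
#    dpkg can finish the unpack/configure cycle that keeps failing.
for pkg in linux-image-3.2.0-24-generic linux-image-3.2.0-25-generic; do
    for script in preinst postinst prerm postrm; do
        printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/$pkg.$script
        chmod 755 /var/lib/dpkg/info/$pkg.$script
    done
done

# 3) Let apt and dpkg retry the pending operations.
apt-get -f install
dpkg --configure -a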

    Read the article
