Search Results

Search found 23614 results on 945 pages for 'update from'.


  • Map Caps-Lock to Control in Windows 8.1

    - by Eric Huang
    Before the Windows 8.1 update, I was able to map Caps Lock to Control using the kind of registry tweak described in this post: Remapping a keyboard key in Windows 8.1. However, after updating to 8.1, my tweak no longer works. What I had done was: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout] "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00. My guess is that Windows 8.1 changed how it interprets the keyboard layout registry key. I'm an avid Emacs user, so this problem is a life-or-death scenario for me.
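
    For readability, the tweak quoted above corresponds to a .reg file along the following lines. The byte layout follows the documented Scancode Map format (version, flags, entry count, one 4-byte pair per mapping, terminator); this is just a restatement of the same data, not a confirmed fix for 8.1:

        Windows Registry Editor Version 5.00

        ; Remap Caps Lock (scancode 003a) to Left Ctrl (scancode 001d).
        ; Layout: 4 bytes version, 4 bytes flags, 4 bytes entry count (mappings + terminator),
        ; then one entry per mapping (new key, then old key), then a 4-byte null terminator.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
        "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00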

    Read the article

  • Development Server Blocked Only from Home

    - by theonlylos
    Recently I've been having an issue with my CentOS 6 test server running Apache, with Webmin on port 10000: when I try accessing any part of the server - SSH, FTP, and even my domains (I have two; both keep getting timeout errors) - from any computer on my home network, it fails. However, when I access via tethering or via my office networks, everything loads fine. While the firewall is the first suspect, my router was never set to block any special ports, and even after adding port 10000 as a specific exception I'm having no luck. Also, I doubt this is an IP blacklisting issue, because I have websites on other servers using CloudFlare for security and I haven't gotten any warnings. Any assistance is greatly appreciated. UPDATE: Just some extra details about the issue: my ISP, to my knowledge, only blocks ports 25 and 80 for residential users to prevent them from running web servers - however, this issue only came up a day or two ago; before that I was using the server successfully for months. Also, the server is not physically located in any of my workspaces - it's a VPS housed in a datacenter.
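
    A rough way to narrow down where the connection dies, sketched with a placeholder hostname (example.com) and assuming shell access both from home and on the server console; these are generic diagnostics rather than a confirmed fix:

        # From the home network: does the route reach the datacenter at all?
        traceroute example.com
        # Does the TCP handshake complete on the SSH and Webmin ports?
        nc -vz example.com 22
        nc -vz example.com 10000

        # On the server console (e.g. the provider's VNC/KVM):
        # is the firewall dropping the home IP?
        iptables -L -n -v
        # watch the logs while retrying from home
        tail -f /var/log/secure /var/log/httpd/access_log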

    Read the article

  • How does ARM Cortex A8 compare with a modern x86 processor

    - by thomasrutter
    I was wondering how a modern ARM chip based on the ARM Cortex-A8 compares, in clock-for-clock performance and capability, to a modern x86 chip such as a Core 2 Duo or Core i5. I realise that, due to the different instruction sets, it'll depend heavily on what you're doing. To put it another way: rendering a web page in WebKit on a 1GHz ARM Cortex-A8 based chip should be about equivalent to doing it on a Core i5 at __ MHz? Update October 2013: Since I asked this question years ago, it has become a lot more common, when reading about mobile devices, to see architecture-agnostic benchmarks that you can compare across platforms - for example, in-browser benchmarks like WebKit's SunSpider will run on just about anything, and you see these in reviews all the time now. And there are things like Geekbench now.

    Read the article

  • Mac OS X Snow Leopard menu gets stuck

    - by georgekk
    Hi, I am using Mac OS X 10.6.3 (Snow Leopard), and sometimes when I have a window selected the File menu doesn't update in relation to the window I'm currently on. I have to click around on different windows and then go back to the window I want for the File menu to display correctly. This really annoys me - the one thing that pisses me off on the Mac system. I wish they had copied how Microsoft does it. Does anyone know a fix for this, or is it something I have to live with?

    Read the article

  • Black screen on login, can get thru decrypt disk and access command line but no GUI

    - by t3lf3c
    Running a fresh 12.04 64-bit alternate install, with disk crypto, on a new Lenovo laptop. The install didn't connect and install modules, even though I had the network cable plugged in and don't have any wacky proxy settings; I had to manually install ubuntu-desktop and define sources after the initial installation, so this seemed a bit weird (the ISO matched the MD5 sum, though). I unplug the network cable, otherwise I get a black screen that I can do nothing with. So I turn the laptop on; I have disk encryption; I type in the password at the Ubuntu decryption GUI and get the "set up successfully" message, then "Waiting for network configuration ...", then "Waiting for up to 60 more seconds for network configuration". At this stage: (a) if I wait for it, I get a black screen that I can do nothing with; (b) if I interrupt the process by pressing Escape, I break through to the command line. From the command line I can log in, then plug my network cable in to do apt-get commands. As a precaution I do some housekeeping, which takes a few minutes to run: sudo apt-get update; sudo apt-get upgrade. Running startx to get to the GUI gives: Fatal server error: no screens found. The .Xauthority file is being created in my home directory but it's empty. I review my order and note the system graphics: Intel HD Graphics (WWAN or mSATA capable). So it's weird that I can't get to GNOME; it looks like the drivers aren't working. Is there a way of getting the Intel drivers from the command line? Or do you have any other suggestions on what to try next?
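
    A rough sketch of what can be checked from that command line, assuming the stock Ubuntu 12.04 packages (the package and service names below are the usual ones, not verified against this particular install):

        # Confirm the GPU is visible on the PCI bus
        lspci | grep -i vga

        # The Intel driver lives in xserver-xorg-video-intel; reinstall it along with the core X server
        sudo apt-get install --reinstall xserver-xorg-video-intel xserver-xorg-core

        # Look for the reason X found no screens
        grep -E "\(EE\)|\(WW\)" /var/log/Xorg.0.log | less

        # If X itself is fine, try bringing up the display manager directly
        sudo service lightdm restart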

    Read the article

  • Streamline Active Directory account creation via automated web site

    - by SteveM82
    In my company we have high employee turnover, and hence our helpdesk receives about a dozen requests per week for new Active Directory accounts. Currently we receive these requests simply via e-mail or voice-mail, and rarely do we have all of the information necessary to create the account. I would like to find a web application that can be used by a manager or supervisor to formalize the requests they make for AD accounts for new employees under their command. Ideally, the application would prompt for all of the necessary information and allow the helpdesk to review the requests and approve or deny each one. If approved, the application would take care of creating the account and send an e-mail to the manager. I have found several applications on the Internet that handle self-service account management (e.g., password resets or updating contact info), which is also nice to have, but nothing that streamlines the new account request and creation part. Can anyone suggest such an application? Thanks.

    Read the article

  • What happens when more RAM is installed than the motherboard supports?

    - by DanDan
    I have a free RAM slot and some spare memory that will fit my computer. However, the problem is my motherboard only supports 2GB and I already have 2GB installed. What would happen if I plugged the spare memory into the free RAM slot? The following things spring to mind: nothing will happen; it will work and the computer becomes faster; the computer becomes slower; explosion; undetermined (any of the above). Does anyone have any experience of this? Update: Egged on by you zealous lot, I went ahead and stuck the extra memory in. It booted up! Unfortunately, the hunch of some has been proved correct: the memory is reported at the capped limit rather than the actual amount installed. A shame then! But thank you all for your suggestions, speculations and stories. For your reference, I am using a Dell Inspiron 6000 with 2GB installed and the latest drivers. I attempted to add 512MB, with no success.

    Read the article

  • Wacom Bamboo CTH460L issues

    - by Robert Smith
    I recently bought a Wacom Bamboo Pen & Touch CTH460L. I installed doctormo's PPA; however, the pen functionality didn't work and the touch was very glitchy (when I touched it, it immediately double-clicked and began to drag elements on the screen). I tried to configure it using the wacom-utility package in the Synaptic Package Manager (version 1.21-1), but that didn't work either. Then I followed this post (#621, written by aaaalex), and after some problems trying to restart Ubuntu (graphics-related problems), the pen works fine (it could be better, though) but the touch functionality doesn't work anymore. Currently I have installed xserver-xorg-input-wacom (1:0.10.11-0ubuntu7), wacom-dkms (0.8.10.2-1ubuntu1) and wacom-utility. The Wacom Utility only displays an "options" field under "Wacom BambooPT 2FG 4X5" but no other option to configure it. What is the correct way to get this tablet working on Ubuntu 10.04? By the way, currently I can't start Ubuntu properly when the tablet is connected (in that case, Ubuntu starts in low-graphics mode); I have to connect it after boot. UPDATE: I uninstalled xserver-xorg-input-wacom and wacom-utility because one of them prevented Ubuntu from starting normally. I only re-installed wacom-dkms 0.8.10.2-1ubuntu1. The pen is working but there is no touch functionality, and the side buttons don't work either. Thanks in advance.
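
    A quick way to see whether the kernel and X actually detect the tablet, sketched below; the grep patterns are only illustrative, and xsetwacom is available only while the xserver-xorg-input-wacom package is installed:

        # Is the tablet visible on USB at all?
        lsusb | grep -i wacom

        # Did the kernel driver bind to it?
        dmesg | grep -i wacom

        # Which input devices does X see? Pen, touch and pad usually show up as separate devices.
        xinput list
        xsetwacom --list devices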

    Read the article

  • How can I install Cinnamon on Ubuntu 12.04 and eliminate the following errors:

    - by jaorizabal
    $ sudo apt-get install cinnamon cinnamon-session cinnamon-settings
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'cinnamon' instead of 'cinnamon-session'
    Note, selecting 'cinnamon' instead of 'cinnamon-settings'
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help resolve the situation:
    The following packages have unmet dependencies:
    cinnamon : Depends: gir1.2-muffin-3.0 but it is not going to be installed
    Depends: libcogl5 (>= 1.7.4) but it is not installable
    Depends: libmuffin0 (>= 1.0.0-0ubuntu1~precise) but it is not going to be installed
    Recommends: gnome-themes-standard but it is not going to be installed
    Recommends: gnome-session-fallback but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    I added this PPA: sudo add-apt-repository ppa:merlwiz79/cinnamon-ppa
    Then I ran the following command: sudo apt-get update && sudo apt-get install cinnamon cinnamon-session cinnamon-settings
    How can I install the latest Cinnamon desktop? How can I fix this error?
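
    A rough way to see why apt refuses the install, using only standard apt tools; the package names come from the error above, and ppa-purge is a separate package that may need to be installed first:

        # Which repository (if any) provides the missing dependencies, and at what version?
        apt-cache policy gir1.2-muffin-3.0 libcogl5 libmuffin0

        # Clear out any half-configured or held packages
        sudo dpkg --configure -a
        sudo apt-get -f install

        # If the PPA's packages were built for a different release, back the PPA out cleanly
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:merlwiz79/cinnamon-ppa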

    Read the article

  • Migrating data from SQL Server 2000 to SQL Server 2005

    - by Muhammad Kashif Nadeem
    I have to migrate existing data from SQL Server 2000 to SQL Server 2005. The schemas of the two databases are different; for example, the Locations table in SS2000 is split into two tables in the new schema and has different columns. This is a one-time activity; after a successful migration I won't need the old database anymore. What is the best way to transfer data from one SQL Server to another when the schemas differ? I can write stored procedures to fetch data from SQL Server 2000 and insert/update tables in SQL Server 2005. What about SSIS? I don't have any experience with it - is it better to create an SSIS package, given that I don't need this again and would need to learn it first? Thanks.
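
    For a one-off job like this, a linked server plus plain INSERT ... SELECT statements is often enough. Below is a minimal sketch; the server name OLD2000, the database names, and the split of Locations into Location/LocationDetail with their columns are all hypothetical, for illustration only:

        -- On the SQL Server 2005 instance: register the 2000 instance as a linked server
        EXEC sp_addlinkedserver @server = N'OLD2000', @srvproduct = N'SQL Server';

        -- Copy and reshape the data with ordinary set-based statements
        INSERT INTO NewDb.dbo.Location (LocationId, Name)
        SELECT LocationId, LocationName
        FROM OLD2000.OldDb.dbo.Locations;

        INSERT INTO NewDb.dbo.LocationDetail (LocationId, AddressLine, City)
        SELECT LocationId, Address, City
        FROM OLD2000.OldDb.dbo.Locations;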

    Read the article

  • Bug with Set / Get Accessor in .Net 3.5

    - by MarkPearl
    I spent a few hours scratching my head on this one... so I thought I would blog about it in case someone else had the same headache. Assume you have a class and you want to use the INotifyPropertyChanged interface, so that when you bind to an instance of the class you get the magic sauce of binding to do your updates to the UI. Well, I had a special case where I wanted one of the properties of the class to add some additional formatting to itself whenever someone changed its value (see the code below).

        class Test : INotifyPropertyChanged
        {
            private string _inputValue;

            public string InputValue
            {
                get { return _inputValue; }
                set
                {
                    if (value != _inputValue)
                    {
                        _inputValue = value + "Extra Stuff";
                        NotifyPropertyChanged("InputValue");
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            public void NotifyPropertyChanged(string info)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(info));
                }
            }
        }

    Everything looked fine, but when I ran it in my WPF project, the textbox I was binding to would not update. I couldn't understand it! I thought the code made sense, so why wasn't it working? Eventually StackOverflow came to the rescue, where I was told that it was a bug in the .Net 3.5 runtime and that a fix was scheduled for .Net 4. For those who have the same problem, here is the workaround: you need to invoke the NotifyPropertyChanged call on the application thread.

        public string InputValue
        {
            get { return _inputValue; }
            set
            {
                if (value != _inputValue)
                {
                    _inputValue = value + "Extra Stuff";

                    // Raise the change notification on the application (UI) thread
                    Application.Current.Dispatcher.BeginInvoke((Action)delegate
                    {
                        NotifyPropertyChanged("InputValue");
                    });
                }
            }
        }

    Read the article

  • Window borders missing - gtk-window-decorator segmentation fault

    - by Balakrshnan Ramakrishnan
    I have been using Ubuntu for about a year now, and I ran into a problem just two days ago. Suddenly I started experiencing a problem with the window borders (the title bar with close, maximize, and minimize buttons). The problem: the window borders disappear; I run "gtk-window-decorator --replace"; for about 20 seconds everything is back to normal; but then the problem returns. I searched over the Internet and found that my problem is similar to what is described in this bug report: https://bugs.launchpad.net/ubuntu/+source/compiz/+bug/814091 The bug report says "Fix Released". I updated everything using the Update Manager, but the problem still remains. Can anyone let me know whether the problem is fixed? If yes, can you please let me know how to get the fix? I have already tried the usual replace/reset commands: unity --reset, unity --replace, and compiz --replace. The window decoration plugin is enabled in CCSM (CompizConfig Settings Manager) and it points to "gtk-window-decorator". I use Ubuntu 11.10 on an Intel Core 2 Duo T6500 with an AMD Mobility Radeon HD 4300 graphics card. If you need more information, please let me know.

    Read the article

  • BlueCoat reverse proxy NTLM authentication

    - by mathieu
    Currently, when we want to access an internal site from the Internet (IIS with NTLM auth), we get two login prompts: step 1: LDAPAuth, from the BlueCoat, which checks login/password validity against Active Directory; step 2: NTLM auth, from our application. Is it possible to configure the reverse proxy to take the LDAP credentials provided at step 1 and pass them to whatever application requests them? Of course, if those credentials aren't valid, nothing happens. We're using a BlueCoat SG400. Update: we're not looking for SSO where the user doesn't have to enter a password. We want the user to enter his domain credentials in the LDAPAuth dialog box, and the proxy to reuse them to authenticate against our application - or any application that uses NTLM. We've only got one AD domain behind the reverse proxy.

    Read the article

  • Changed folder contents not updating in Finder (OS X 10.8)

    - by speedofmac
    I've been having this problem for a number of days now. When I update the contents of a folder using terminal commands, by decompressing archives, or by using "Save As", the affected folder often fails to reflect these changes. Sometimes it takes quite a while for any files to be shown, even if the total size of the folder is fairly small (< 10 MB). I'm running an '09 MacBook Pro 13", so it isn't the newest system, but it certainly has enough oomph to display a list of files in a folder. Does anyone know what might be causing this?

    Read the article

  • Zabbix 2.2.1: no graphs in web scenario

    - by Mick
    Hello, for some time I have had a problem with graphs in web scenarios on Zabbix 2.2.1; I have put a screenshot below, and the problem appears on every web scenario graph. I installed this same scenario on a second Zabbix that runs on my local virtual machine. On my local machine all Zabbix components (server, frontend, agents) run together, but in my production Zabbix the frontend is separated from the rest. Scenario for OpenERP:
    Name: OpenERP Web Checks
    Application:
    New application:
    Authentication:
    Update interval (in sec): 60
    Retries: 1
    Agent: Internet Explorer 10.0
    Steps:
    Name: OpenERP login page
    URL: http://openerp.test.com
    Post:
    Variables:
    Timeout: 15
    Required string:
    Required status codes: 200
    My Zabbix server performance (screenshot): Does anybody have an idea how to fix it? Regards, Mick

    Read the article

  • Updating Banshee to 2.4

    - by Lucasguy11
    I have Banshee 2.2.1 with Ubuntu 11.10. I have been trying to update Banshee to 2.4 (released yesterday) but it just isn't working. I have been using sudo add-apt-repository ppa:banshee-team/ppa in the terminal, as described on the banshee.fm website, but after running it the terminal says this:

        sudo add-apt-repository ppa:banshee-team/ppa
        You are about to add the following PPA to your system:
        PPA for Banshee Team
        This PPA contains the latest stable debs of Banshee for Ubuntu. To install Banshee, you must first enable the PPA on your system:
        1. Open Software Sources (System->Administration->Software Sources)
        2. Navigate to the "Third Party Sources" tab.
        3. Click "Add"
        4. Enter the APT line below that corresponds to your Ubuntu version that starts with "deb".
        5. Click "Add Source"
        6. Click "Close"
        7. It will prompt you to reload your software cache. Click "Reload".
        8. Now install the package "banshee" from Synaptic, or using the command below:
        sudo apt-get install banshee
        For those who wish to compile from trunk, add the deb-src line and then run "sudo apt-get build-dep" to install all required dependencies before starting to compile. Unstable (versions which have odd minor version numbers) debs of Banshee can be found here: https://launchpad.net/~banshee-team/+archive/banshee-unstable
        More info: https://launchpad.net/~banshee-team/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.OPAjxemDQr --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 9D2C2E0A3C88DD807EC787D74874D3686E80C6B7
        gpg: requesting key 6E80C6B7 from hkp server keyserver.ubuntu.com
        gpg: key 6E80C6B7: "Launchpad PPA for Banshee Team" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1

    I believe I have the PPA, but I'm not sure. I need a step-by-step process to get this working; I've been trying to figure it out for quite a while now...
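
    Judging from the output above, the PPA was added and its signing key was already imported ("not changed" is not an error). What remains are the last two steps from the PPA's own instructions; a minimal sketch:

        # Refresh the package lists so the PPA's newer Banshee becomes visible
        sudo apt-get update

        # Confirm that the 2.4 candidate now comes from the banshee-team PPA
        apt-cache policy banshee

        # Install/upgrade it, then verify
        sudo apt-get install banshee
        banshee --version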

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through the issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2 but it resurfaced the next day. We definitely resolved it today, though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References
    We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:
    Zoiner Tejada - JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    WebDAV - Where I learned about "Accept" -> http://www-jo.se/f.pfleger/cors-and-iis?
    IT Hit - Tells about NOT using '*' -> http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    Carlos Figueira - Sample back-end code (newer) -> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) -> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background
    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below: IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. [Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error.]

    Why does this happen?
    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly: 1) Page Loads -> 2) JavaScript Request Processed by Browser -> 3) Browser Prepares to Submit Request -> 4) [Chrome] Browser Submits Pre-Flight Request -> 5) Server Responds with HTTP 200 -> 6) Browser Submits Request -> 7) Server Responds with Data -> 8) Page Shows Data. This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted; instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data, so more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it
    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information; I indicate those references above. There are basically three (3) tasks needed to make this work. Assumptions: you are using Visual Studio, Web API and JavaScript, need Cross-Origin Resource Sharing, and are targeting several browsers.

    1. Configure the client
    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET:

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The POST CORS JavaScript function (credit to Zoiner Tejada)

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada)

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple… except… you're not done yet.

    2. Change Web API Controllers to Allow CORS
    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access, so you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header, Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated. NOTE: you'll see a lot of references to using "*" in the header value; for security reasons, Chrome does NOT recognize this as valid.
        [HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")]
        [HttpHeader("Access-Control-Allow-Credentials", "true")]
        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")]
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")]
        [HttpHeader("Access-Control-Max-Age", "3600")]
        public abstract class BaseApiController : ApiController
        {
            [HttpGet]
            [HttpOptions]
            public IEnumerable<FooItem> GetFooItems(int id)
            {
                // fooItems: the FooItem collection defined elsewhere in the controller
                return fooItems.AsEnumerable();
            }

            [HttpPost]
            [HttpOptions]
            public void UpdateFooItem(FooItem fooItem)
            {
                // NOTE: The fooItem object may or may not (probably NOT) be set with actual data.
                // If not, you need to extract the data from the posted form manually.
                if (fooItem.Id == 0) // However you check for default...
                {
                    // We use Newtonsoft.Json; the raw form post is read via System.Web.HttpContext.
                    string jsonString = System.Web.HttpContext.Current.Request.Form.GetValues(0)[0].ToString();
                    Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();
                    fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));
                }

                // Update the fooItem object.
            }
        }

    Please note a few specific additions here:
    * The header attributes at the class level are required. Not all of those methods and headers need to be specified, but we find it works this way so we aren't touching it.
    * Web API will actually deserialize the posted data into the object parameter of the called method on occasion, but so far we don't know why it does and doesn't.
    * [HttpOptions] is, again, required for the pre-flight check.
    * The "Access-Control-Allow-Origin" response header should NOT NOT NOT contain an '*'.

    3. Headers and Methods and Such
    We had most of this code in place but found that Chrome and Firefox still did not render the data. Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data was returned properly. We learned that, among the headers set at the class level, we needed to add "ACCEPT". Note that I accidentally added it to methods and to headers; adding it to methods worked, but I don't know why. We added it to headers also for good measure.

        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA...
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin...

    Next Steps
    That should do it. If it doesn't, let us know. What to do next? Don't hardcode the allowed domains. Note that port numbers and other domain name specifics will cause problems and must be specified; if these change, do you really want to deploy updated software? Consider Carlos Figueira's approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names, and then you can do it dynamically. There are, of course, other ways to do it dynamically, but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Fsck stuck on "Clone Multiply-claimed blocks"

    - by user3436581
    Update: I fixed the issue. But I don't see an eth0 directory in /sys/class/net - any idea how to fix that? I could not bring up eth0, and I need it badly so that I can back everything up over the network, since I'm working on the VM console. This virtual machine's fsck on sda1 is stuck: I've tried e2fsck and fsck, and both get stuck after "Clone multiply-claimed blocks? yes". I've waited for around 5 to 8 hours and it's still the same. I can't mount the filesystem without fixing these errors. I'm doing this after unmounting all filesystems in rescue mode; rebooting does not help. Any suggestions? Screenshot: http://i.stack.imgur.com/lgixr.jpg Alternative screenshot URL: http://s27.postimg.org/grk4p9eeb/error.png
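
    Coming back to the missing eth0 mentioned in the update above, a rough sequence to check whether the interface exists at all; the driver names below are only examples (on a VM the NIC driver is typically e1000, virtio_net or vmxnet3, depending on the hypervisor):

        # Which interfaces does the kernel currently know about?
        ip link show

        # Is a NIC present on the virtual PCI bus?
        lspci | grep -i ethernet

        # Did a driver bind (or fail) during boot?
        dmesg | grep -iE "eth|e1000|virtio"

        # Load the driver by hand if it is missing, then bring the interface up
        modprobe e1000
        ifconfig eth0 up
        dhclient eth0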

    Read the article

  • Problems with Dual Boot

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistakes, I followed the steps of this tutorial: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/ Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu it stopped working; it didn't even launch. When I turned on my PC and chose to run "Ubuntu 12.04" on the GRUB screen, a weird message appeared. Well, so I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with EaseUS Partition Manager. However, when I start my PC, there is still the option of "Ubuntu 12.04" to choose; is that bad? And what about now - can I use the Windows installer for Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks, Thales.

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-27

    - by Bob Rhubart
    Deploying OAM "correctly" | Chris Johnson fusionsecurity.blogspot.com Chris Johnson's concise blog post will help you to deploy Oracle Access Manager "for real."
    Oracle BPM: Suspend and alter process | Martijn van der Kamp www.nl.capgemini.com "There's one tricky part with intervening in the run time behavior of a process, and that is compliance," says Martijn van der Kamp. "Make sure your solution covers the compliance regulations by the regulatory department, including the option of intervening in the process."
    Red Samurai Tool Announcement - MDS Cleaner V2.0 | Andrejus Baranovskis andrejusb.blogspot.com Oracle ACE Director Andrejus Baranovskis shares news about an upcoming free product for MDS administrators.
    Oracle bulk insert or select from Java with Eclipselink | Edwin Biemond biemond.blogspot.com Oracle ACE Edwin Biemond shows you how to retrieve all the departments from the HR demo schema, add a new department, and do a multi insert.
    WebLogic Server Weekly for March 26th, 2012 | Steve Button blogs.oracle.com Steve Button shares information on: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services.
    Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH www.neooug.org May 14-15, Cleveland, OH. More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.
    Thought for the Day: "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May

    Read the article

  • Upgrade PHP to 5.3 in Ubuntu Server 8.04 with Plesk 9.5

    - by alcuadrado
    I have a dedicated server with Ubuntu 8.04 and really need to upgrade PHP to version 5.3 in order to deploy a new version of the system. This version of PHP is the default one in Ubuntu 10.04, so I considered upgrading the OS, but after trying that I lost my Plesk installation, which annoyed my client. I tried adding the dotdeb.org repositories but, I don't know why, after running an apt-get upgrade I get this:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
        libapache2-mod-php5 php5 php5-cgi php5-cli php5-common php5-curl php5-gd php5-imap php5-mysql php5-sqlite php5-xsl
        0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

    Any idea why this is happening? Or do you know any alternative method (except compiling my own binaries) to upgrade PHP, or to update Ubuntu without losing Plesk? Thanks!
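
    Packages listed as "kept back" usually mean the upgrade would pull in new dependencies that a plain apt-get upgrade refuses to install. A rough sketch of what to try next, assuming the dotdeb entries really are in sources.list (this does not address whether Plesk is compatible with PHP 5.3):

        # Where would php5 come from, and at what version?
        apt-cache policy php5

        # Let apt install the new dependencies the dotdeb packages need
        sudo apt-get dist-upgrade

        # Or upgrade just the PHP stack explicitly
        sudo apt-get install php5 php5-cli php5-cgi php5-common libapache2-mod-php5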

    Read the article

  • A New Year’s Celebration in June

    - by Kristin Rose
    Happy Oracle New Year, everyone! Last week marked the official start of FY13, and we could not be more pleased with all that lies ahead this quarter, and all that we accomplished in the last… especially our newly updated Oracle PartnerNetwork (OPN) Solutions Catalog. If you thought it was great before, just wait until you see it now. We are ringing in our New Year right by fully equipping partners with the tools they need to have another successful year. The Solutions Catalog will help draw attention to your partner services and offerings, highlighting your expertise, and it is centralized, easy to navigate, and customer friendly. Some of the exciting advancements include: a streamlined search interface; a robust lead-capture tool that requests the contact information of potential customers; a professional display of customer recommendations to showcase your skill set; and a partner dashboard with enhanced profile creation and an improved publication process. Most exciting of all, updating your profile is easier than ever with the updated partner dashboard. Keeping your partner profile up to date will help ensure customers are looking at the correct information about your company and can easily stay on top of any new developments or Specializations you receive. So don't cut yourself short - be sure to update your profile today if you haven't already done so. For more information on the exciting upgrades available to you, visit the 'Resources for Partners' page or watch Takane Aizeki, Principal Portal Manager at Oracle, walk through the upgraded Solutions Catalog and the different ways to showcase your value as an Oracle solution provider. Cheers, Lydia Smyers, Group Vice President, WWA&C and Communications

    Read the article

  • Vista machine hardwired cannot see XP laptop on wireless

    - by Kahega
    I have a laptop with Windows XP connected to my wireless router. I am trying to view the shared files on the laptop from my Vista desktop computer. The desktop computer is hardwired to the same router. I can see the desktop PC fine from the XP laptop and view its shared folders, but I cannot see the XP laptop from the desktop PC. I have tried installing the link update for XP; that is not the issue. I have also tried turning off firewalls - no dice. I have scoured Google for this issue and do not see any resolutions. Any suggestions would be appreciated.

    Read the article

  • Samsung 830 very slow benchmark numbers

    - by alekop
    I just bought a new SSD and installed a fresh copy of Windows on it. I didn't see any noticeable difference in boot times or app start-up times, so I decided to benchmark it. Asus P7P55D-E, Intel i5-760, Samsung 830 256GB SATA III, Windows 7 Ultimate 64-bit. The Windows Experience Index gave the drive a 7.3 rating, but real-world performance is not particularly impressive. Any ideas why the numbers are so low? UPDATE: It turns out that SATA III support is turned off by default on the P7P55D motherboard. After enabling it in the BIOS (Tools - Level Up), the scores went up:

                Read    Write
        Seq     325     183
        4K      16      49
        IOPS    32K     28K

    It's an improvement, but still far below what they should be for this drive.

    Read the article

  • Bug: Weird symbols in PDF generated from InDesign CS3

    - by Joe Yau Pong
    I recently encountered a weird bug in Adobe Acrobat. I generated a PDF from InDesign, and some weird symbols appear out of nowhere in the phrase "Build relationships". Here is the image: http://i.stack.imgur.com/FrIII.jpg When I copy the words from the PDF viewer back into Notepad, the words are correct: "Build relationships". Here are my configurations: Mac OS 10.4, InDesign CS3, Acrobat 9 Pro. I'm going to update my software to CS6 soon, but what seems to be the problem here? Any suggestions? Thanks very much in advance.

    Read the article
