Search Results

Search found 6931 results on 278 pages for 'almost surely'.

Page 134/278 | < Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >

  • How Do Computers Work? [closed]

    - by Rob P.
    This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. But I Don't Know How Computers Work! Please bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can, just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.ReadLine() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly, though it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an electronics question and not a 'computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know.)

    Read the article

  • How can I compare audio, and what programming language should I use?

    - by Pimmetje
    I have 2 audio files that are from almost the same source, but at some points they're shifted a bit. Also, the codecs do not match. I would like to make a program that takes a 2-4 second sample and looks for it in the other file (most of the time it's not shifted more than 30 seconds), then takes the time and stores it, skips ahead a few seconds, takes another sample, and finds it again. This way I want to create a file where I can see at what points the file is shifted. For people who are more interested in what I want: I have an audio/video file with speech and subtitles, but I have the same speech from different sources which differs a bit in time, and I'd like to make a program that can correct the subtitle timing for me. Enough about the problem. I looked on the Internet for ways to compare audio files. Based on what I read, comparing 2 audio files isn't as easy as I had hoped. Some talk about algorithms (http://www.perlmonks.org/?node_id=169641). Some audio libraries: portaudio.com, aubio.org, sourceforge.net/projects/ccaudio/, ambiera.com/irrklang/. The biggest problem I have is that I can't find something I can generate from the audio to use for comparison. I hope someone here can point me in the right direction.
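
    A common starting point for this kind of alignment is cross-correlation: slide the short sample along the longer signal and see where they match best. Below is a minimal Python sketch, assuming both files have already been decoded to mono 16-bit WAV at the same sample rate (e.g. with ffmpeg); the file names are placeholders, and for real speech you would typically correlate a fingerprint (e.g. spectral features) rather than raw samples.

      # Sketch: locate a short snippet inside a longer track by cross-correlation.
      # Assumes both files were first decoded to mono WAV at the same sample rate.
      import numpy as np
      from scipy.io import wavfile
      from scipy.signal import correlate

      rate_s, snippet = wavfile.read("snippet.wav")   # the 2-4 second sample
      rate_t, track = wavfile.read("track.wav")       # the other source
      assert rate_s == rate_t

      snippet = snippet.astype(np.float64)
      track = track.astype(np.float64)

      # The peak of the cross-correlation marks the best-matching alignment.
      corr = correlate(track, snippet, mode="valid", method="fft")
      offset = int(np.argmax(corr))
      print("best match at %.2f s" % (offset / rate_s))

    Repeating this every few seconds, and recording each offset, would produce exactly the shift table described above.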

    Read the article

  • Open Source Highlight: namebench

    - by eddraper
    DNS is a big deal. Even small incremental changes to improve its performance can yield significant value due to the vast quantity of look-ups required when using the internet. Until now, it's always been one of those things I had to kinda take on faith... was my ISP doing a good job? Are those public DNS servers really that much faster? What about security and privacy concerns? Let me introduce you to namebench. This is the kinda tool I really love: one that immediately delivers value and is almost over-the-top OCD in its attention to detail. Trust me, this tool is utterly ruthless in its quest for getting it right; you're not left with a big question mark after it presents its data. The results are conclusive and actionable. Here's what it does: it hunts down the fastest DNS servers from your desktop that it can find, using thousands of requests. No, it doesn't pop up a little dialog in 10 seconds to give you some “off the cuff” answer from a handful of providers. It takes the better part of 10-15 minutes to run. When it finishes, it presents you with a veritable horn-o-plenty of data. Mean response duration, response distribution, bad data: no stone is left unturned. Check it out. You'll dig it.

    Read the article

  • How do I get debuild to put the binary in /usr/bin?

    - by SammySP
    I have recently been trying to package a small Python utility to put on my PPA and I've almost got it to work, but I'm having problems making the package install the binary (a chmod +x'd Python script) under /usr/bin. Instead it installs under /. I have this directory structure - http://db.tt/0KhIYQL. My package Makefile is like so: TARGET=usr/bin/txtrevise make: chmod +x $(TARGET) install: cp -r $(TARGET) $(DESTDIR) I've used $(DESTDIR), as I understand it, to place the file under the debian subdir when debuild is run. I have the txtrevise script, my executable, under the usr/bin folder at the root of my package. I also have the Makefile and usr/bin/txtrevise in my tarball: txtrevise_1.1.original.tar.gz. However, when I build this and look inside the Debian package, txtrevise is always at the root of the package instead of under usr/bin, and will be installed to / instead of /usr/bin. How can I get debuild to put the script in the right place? Thanks. Any help would be greatly appreciated. I'm stumped.
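
    The usual culprit with this symptom is the install target: cp -r $(TARGET) $(DESTDIR) copies the script to the root of the staging directory rather than recreating usr/bin inside it. A minimal sketch of a fix, assuming the script really lives at usr/bin/txtrevise in the source tree (the recipe line must start with a tab):

      # Stage the script as debian/<package>/usr/bin/txtrevise when debuild runs.
      # install -D creates the leading directories and sets the mode in one step.
      TARGET = usr/bin/txtrevise

      install:
      	install -D -m 755 $(TARGET) $(DESTDIR)/$(TARGET)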

    Read the article

  • Simple C: How do I scan this information in properly?

    - by Doc
    OK, this is a simple question but for some reason I just can't get it right. I have to scan hundreds of lines from a file and store them in arrays (which I can normally do an OK job with). However, at one point the code will specify a number that then corresponds to the next batch of chars, ints, and floats going into various arrays. Since I know I am not describing this correctly, here is an example. One line of the file I am reading will contain something close to one of these: 0221 T 2 S P 850 150 0.90 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Name_of_place 0104 L 1 F 400 1.00 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Ballroom The problem I am having is the part at the front. The rest after it is generally the exact same, but the number (and letter) at the front decides which values are going in and how many there are. I am almost completely lost on how to write a way to scan this and store the data into arrays correctly.
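
    One workable pattern is to read the leading code and type letter first, then pick a scan format for the rest of the record based on that type. A rough C sketch, assuming (from the two sample lines) that 'T' records carry two flag letters and two ints where 'L' records carry one of each; the field names are guesses:

      /* Sketch: read the leading code and type letter, then branch on the type
       * to pick the right format for the rest of the record. */
      #include <stdio.h>

      int main(void)
      {
          FILE *fp = fopen("data.txt", "r");
          char id[8], kind, name[64];
          if (!fp) return 1;

          while (fscanf(fp, "%7s %c", id, &kind) == 2) {
              int n, a, b; char f1, f2; float x, y;
              if (kind == 'T')   /* e.g. "0221 T 2 S P 850 150 0.90 0.75 ..." */
                  fscanf(fp, "%d %c %c %d %d %f %f", &n, &f1, &f2, &a, &b, &x, &y);
              else               /* e.g. "0104 L 1 F 400 1.00 0.75 ..." */
                  fscanf(fp, "%d %c %d %f %f", &n, &f1, &a, &x, &y);
              /* the remaining fields look identical for both types: skip or
               * store them, then read the trailing name */
              fscanf(fp, "%*d %*d %*d %*d %*c %*c %*d %*f %*f %*d %63s", name);
              /* store everything into the per-type arrays here */
          }
          fclose(fp);
          return 0;
      }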

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts, which are new). I recently upgraded my entire site and made some redirection changes in my .htaccess file. I have made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked by Grammarly and Plagium to ensure I have no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. The drop happened within a couple of minutes. I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should. I'm scared I have some redirect loop, although the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap can cause my rankings some harm. My website is pretty SEO-optimized onsite. I have about 1500 indexed backlinks and have been building them steadily over about half a year. I would really appreciate some clarity on this matter.
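
    For reference, a loop-safe way to strip index.php/ with mod_rewrite is to key the external redirect off THE_REQUEST, which only matches the original client request, so the internal rewrite back into the front controller can't trigger it again. A sketch, assuming a CodeIgniter-style index.php front controller:

      RewriteEngine On
      # Redirect old /index.php/foo URLs to clean /foo, once, visibly (301).
      RewriteCond %{THE_REQUEST} ^GET\ /index\.php/(.*)\ HTTP
      RewriteRule ^ /%1 [R=301,L]
      # Internally route anything that isn't a real file or directory
      # back through the front controller.
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php/$1 [L]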

    Read the article

  • Oracle Virtual Desktop Infrastructure

    - by Fat Bloke
    A lot of the recent blog entries here have been about Oracle VM VirtualBox, possibly the coolest personal desktop virtualization product known to man. Deploying VirtualBox on your PC or Mac lets you run many virtual desktops at the same time for one user: you. But did you know that VirtualBox can also power an Enterprise-scale virtual desktop deployment too, delivering many desktops to many users? As part of another Oracle product, Oracle Virtual Desktop Infrastructure (VDI), VirtualBox can run your Windows, Linux or Solaris desktops on servers located in the datacenter. Oracle VDI orchestrates the whole deal by looking after: creating or cloning the virtual desktops from a master template; managing the lifecycle of the desktops (create, start, suspend, resume, stop, delete); assigning which users get which desktops; delivering easy and fast access to these virtual desktops from almost any device, such as existing PCs or Macs, iPads, or specially designed Sun Ray client devices; and load balancing and session management of all of this. Architecturally the solution looks something like this: [architecture diagram]. This is an increasingly hot area of the IT landscape, so the Fat Bloke has decided to create a new blog category (VDI) and dedicate a few blog entries to look into this in a bit more detail over the next few weeks. Watch this space... - FB

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate engine would be either a rendering engine or game engine with a modular design that might be ready to go out of the box but would be simple enough in case I need to rip out some of the guts in the rendering management and implement my own. It's a tough call because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and make me less likely to run into some obscure, less-documented problem. But OpenGL might be acceptable if the recommended framework were definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (as part of adding to the density of geometry detail).

    Read the article

  • A Myriad of Options

    - by Mark Hesse
    I am currently working with a customer that is close to outgrowing their Exadata X2-2 half rack in both compute and storage capacity.  The platform is used for one of their larger data warehouse applications and the move to Exadata almost two years ago has been a resounding success, forcing them to grow the platform sooner than anticipated. At a recent planning meeting, we started looking at the options for expansion and have developed five alternatives, all of which meet or exceed their growth requirements, yet have different pros and cons in terms of the impact to their production and test environments. The options include an in-rack upgrade to a full rack of Exadata using the recently released X3-2 platform (an option that even applies to an older V2 rack), multi-rack cabling the existing X2-2 to another full rack or half rack X2-2 (and utilizing both compute and storage capacity in the other rack), or simply adding a new X3-2 half rack (and taking advantage of the added compute and flash performance in the X3-2). While the decision is yet to be made, it had me thinking that one of the benefits of Exadata over a traditional database deployment is that when the time comes to expand the platform, there are a myriad of options.

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line ending sequence. Applications would have to filter that noise themselves. Older Mac versions use only CR in text mode as line endings, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode. Implementing newline interpretation in the parser might also avoid some overhead of text mode, where buffers need to be rewritten (and possibly resized) before returning to the application, which may be less efficient than doing the same work in the application. So, my question is: is there any good reason to still rely on the host OS to translate line endings and file truncation?
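
    To make the distinction concrete, here is a minimal C sketch (the file name is a placeholder); the only difference between the two handles is the 'b' in the mode string, and on POSIX systems the two behave identically:

      /* Sketch: on Windows, text mode translates CRLF to '\n' on read and
       * stops at a 0x1A byte; binary mode returns the bytes untouched. */
      #include <stdio.h>

      int main(void)
      {
          FILE *t = fopen("data.txt", "r");   /* text mode: "\r\n" -> "\n" */
          FILE *b = fopen("data.txt", "rb");  /* binary mode: bytes as-is  */
          int c;

          while ((c = fgetc(t)) != EOF)
              ;   /* never sees '\r' before '\n' on Windows; stops at 0x1A */
          while ((c = fgetc(b)) != EOF)
              ;   /* sees every byte, including '\r' and 0x1A */

          fclose(t);
          fclose(b);
          return 0;
      }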

    Read the article

  • Efficient algorithm for recording gameplay object positions

    - by Scorch
    So, I have a game idea in mind, and for that I need to record the game around the player. I'm not talking about recording it as video, but rather recording the scene objects and their positions within the game, and then rendering them, giving the player the ability to go back and forth, to stop time and move around. I've made a prototype with some data structures in C#, since this is going to be the programming language we'll be using in our game, but if we want the player to be able to go back just five minutes with the data of just 100 NPCs, it takes almost 1GB of RAM. Right now, I'm just storing a doubly linked list, each item holding an object position. In the game, I'll need to store even more data in each node, so I need something even lighter. Of course, this algorithm is completely unoptimized, but still, that is a lot. The alternative would be to create the NPCs that aren't really important to the game on the fly when the user is viewing the past, but I don't really like that very much, for the sake of realism. I wonder if there is a better way to store this? Thanks in advance, Scorch
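
    A first thing to check here is per-sample overhead: a doubly linked list of heap-allocated nodes costs two pointers plus an object header for every sample, which can dwarf the position data itself. A hedged C# sketch of the flat-struct alternative (type and member names are made up for illustration); delta-encoding against the previous sample, or only recording when an NPC actually moves, would shrink it further:

      // Sketch: store samples as value-type structs in one contiguous buffer
      // per NPC instead of one heap node per sample.
      using System.Collections.Generic;

      struct PositionSample          // 16 bytes of payload, no per-sample heap node
      {
          public float X, Y, Z;      // position
          public float Time;         // game time of the sample
      }

      class NpcTrack
      {
          // List<T> over a struct keeps all samples in one contiguous array.
          public readonly List<PositionSample> Samples = new List<PositionSample>();

          public void Record(float x, float y, float z, float time)
          {
              // Optional: skip the sample if the NPC hasn't moved since last tick.
              Samples.Add(new PositionSample { X = x, Y = y, Z = z, Time = time });
          }
      }

    At 100 NPCs, five minutes, and 60 samples per second, this layout works out to roughly 20-30 MB rather than a gigabyte.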

    Read the article

  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. I did the same on my old SSD, so I created a new EXT4 partition and then just dd'd the data across (sorry, I don't know the exact command I ran anymore) and, after reinstalling GRUB, I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as it was on the other disk... a disk that was half its size. So it looks as if I've copied the files across incorrectly and copied some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures. Things that look at the partition (and not the files) seem to think the drive is 120GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. But two questions: Is there a way to fix my filesystem so it knows what it's really on about? fsck, e2fsck, and badblocks all seem to be able to scan it without finding a problem. If I do plug my old SSD back in, copy the data off my PCI-E onto it and then copy it back onto a fresh filesystem (e.g. juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and softlinks where they are.
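
    On the first question: if the old filesystem was dd'd over wholesale, its superblock still records the old size, so partition-level tools see 120GB even though the files are intact. Growing the filesystem in place usually fixes the reported capacity. A sketch, assuming the new SSD's partition is /dev/sdb1 (adjust the device, run from a live session since this is the root filesystem, and back up first):

      sudo umount /dev/sdb1        # must be unmounted, hence the live session
      sudo e2fsck -f /dev/sdb1     # resize2fs requires a clean, forced check
      sudo resize2fs /dev/sdb1     # no size argument: grow to fill the partition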

    Read the article

  • GRUB cannot boot after resizing Windows XP (NTFS) partition. What is to be done? [closed]

    - by cipricus
    Possible Duplicate: How to Repair Grub while dual booting ( win7 / ubuntu 11.10) I had installed Lubuntu on a PC with Windows XP and used dual boot for some time with no problems. Since I had almost abandoned Windows (kept it for printing...) I decided to resize its NTFS partition and add the freed space to my Ubuntu space. I tried that with a GParted stick and a live CD, but it would not work due to an issue with the NTFS partition: GParted signaled with a red exclamation point that there was a problem with that partition. I read that a chkdsk might solve it, but in the end I used EaseUS in Windows to shrink (resize) the NTFS partition and create a new one (ext3) from the space left. All seemed OK with that procedure, but resizing the partition and moving the data might have affected GRUB, or whatever the following message means, which I get when trying to start my PC: error: file not found grub rescue> Booting from a live CD I see, besides the shrunken Windows partition and my old Linux one, the newly created partition, containing a directory called lost+found that I cannot open. Can I fix GRUB and recover both my XP and Lubuntu installations?
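
    The "error: file not found / grub rescue>" prompt usually means GRUB's core image can no longer locate its modules because the partition holding /boot moved. Reinstalling GRUB from the live CD generally recovers both boot entries; a sketch, assuming the Lubuntu root partition is /dev/sda5 and the disk is /dev/sda (adjust both to match the live CD's partition listing):

      sudo mount /dev/sda5 /mnt                                # the Lubuntu root partition
      sudo grub-install --boot-directory=/mnt/boot /dev/sda
      # after rebooting into Lubuntu, regenerate the menu (re-detects Windows XP):
      sudo update-grub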

    Read the article

  • How to build an API on top of an existing Rails app with NodeJS, and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. They have almost the same stats, a similar strategy, and mobile apps. We have just decided that we need to merge the data from both companies and to start a new app from scratch, since the RoR app is too old and heavily patched, and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure. The challenge: I need to build a new version that can handle a lot of traffic, preserve the SEO strategies of both companies, serve 2 different domains, and have a strong API that can support legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think that I need to create a service that can handle a lot of IO requests, since the apps are heavily used to create orders for restaurants at a certain time of the day. The questions: With all this in mind: What type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure? Should I use NodeJS to build an API or just build a new service with Ruby? I know that I'm asking too much from you guys, but please help me by answering any topic that you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!
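
    On the multiple-API-versions point, one common approach is to mount one router per version, so the legacy payload shapes can live alongside the new ones until the old mobile apps die off. A minimal sketch with Express (Express 4+ assumed; the route and port are made up for illustration):

      // Sketch: versioning at the router level, so /api/v1 keeps serving the
      // legacy apps while /api/v2 serves the new native clients.
      var express = require('express');
      var app = express();

      var v1 = express.Router();
      v1.get('/orders', function (req, res) { /* legacy payload shape */ });

      var v2 = express.Router();
      v2.get('/orders', function (req, res) { /* new payload shape */ });

      app.use('/api/v1', v1);
      app.use('/api/v2', v2);
      app.listen(3000);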

    Read the article

  • Dell Inspiron 7520 and Ubuntu 12.04 issues

    - by user91358
    I have a Dell Inspiron 7520 in the highest configuration: 3rd Generation Intel® Core™ i7-3612QM processor (6M Cache, up to 3.1 GHz), 15.6" Full High Definition (1080p) LED Display, 8GB Dual Channel DDR3 SDRAM at 1600MHz, 1TB 5400RPM SATA HDD + 32GB mSATA SSD w/Intel Smart Response, Blu-ray Disc (BD) Combo (Reads BD and Writes to DVD/CD), AMD Radeon™ HD 7730M 2GB, 6.09 lbs. I installed Ubuntu 12.04 a few days ago and I'm facing some issues: 1) Sometimes the whole notebook freezes and I have to hold the power button for 5 seconds to shut it down. I think it is something with the VGA output and a connected external monitor. I have read somewhere that it is already a reported bug, but I am not sure, because it happens sporadically: sometimes it freezes right after I log in, sometimes it runs a few hours and then freezes. I am using the proprietary drivers, but I wasn't able to install the ones with updates. 2) The fan is quite noisy even when the notebook is almost idle (max 10% CPU usage). Can you recommend some power-management software to lower the noise? I have tried the CPU frequency scaling indicator, but it seems to have no effect. 3) When I want to log out, restart or shut down using the menu in the upper right corner, the top and left bars disappear, but programs are still running and won't close to complete the logout or shutdown. When I use the CLI command, it works fine. Thanks for any help you can provide.

    Read the article

  • Ubuntu automatic logout whenever I execute exe files

    - by KeepTrying
    I have a problem. Here's the thing. There were 4 partitions on my hard drive: one for the Ubuntu root folder, one for the Ubuntu home folder, one for general stuff like music and movies, and the last one for swap. To install Windows 7, I resized the partitions and changed their order using GParted. I moved all of the ext-formatted partitions to the left, which means the spare space would be at the right, and I formatted that spare space as NTFS and installed Windows 7. After successfully installing Windows 7, I used a live USB to fix GRUB. I installed Boot Repair and, with just one click, now I can dual boot Ubuntu and Windows 7. But, and this is the point, because of changing the order of the partitions, especially the partition holding the home folder, I couldn't log in to Ubuntu. I used recovery mode and changed the file /etc/passwd. Everything almost got back to normal, except one thing: the Windows apps that I installed via Wine don't work anymore. I run them via the Applications/Wine/Programs menu but nothing loads. One more thing: when I double-click on exe files to run them, Ubuntu suddenly logs out. Thank you for reading my post; it's quite long and my English is fairly poor. I'd appreciate anyone who reads it.

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain-vanilla HTML website. I have been making regular modifications to it on an almost daily basis. The site is hosted by Hostmonster, and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a "robot/spider" calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on Google (e.g. by searching for "full reserve banking") the cache is dated only 5th June. I always thought that a visit from the Google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling Google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check whether I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what awstats was telling me: [awstats screenshot]

    Read the article

  • How could there still not be a mysqldb module for Python 3? [closed]

    - by itsadok
    This SO question is now more than two years old. MySQL is an incredibly popular database engine, Python is an incredibly popular programming language, and Python 3 was officially released two years ago, and was available even before that. What's more, the whole mysqldb module is just a layer translating Python's db-api to MySQL's API. It's not that big of a library. I must be missing something here. How come almost* nobody in the entire open source community has spent the (I'm guessing) two weeks it takes to port this lib? Is Python 3 that unpopular? Is the combination of Python and MySQL not as common as I assume? Or maybe it's just a lot harder to port mysqldb than I assume? Anyone know the inside story on this? * Now I see that this guy has done it, which takes some of the wind out of my question, but it still seems too little and too late to make sense. EDIT: OK, I'm aware that the stock answers for these kinds of questions cover this one as well. Patches welcome, scratch your itch, we don't work for you and we don't have the time, etc. I actually took a shot at porting this about a year ago, but it was my first time doing anything with Python C extensions, and I failed. My point in writing this was not a plea for somebody to write it, but genuine curiosity: it seems that some much more complicated libraries have been ported to Python 3 already, and in the poll for which libraries should be ported, mysqldb is not even nominated! That suggests that maybe (2) is the right answer. UPDATE: I found that there are several new libraries that provide MySQL support under Python 3; I just wasn't googling hard enough. That explains everything.

    Read the article

  • One of my VMs went boom using VirtualBox and how it got fixed

    - by Enrique Lima
    I am running an HP Envy 15 with 16GB of RAM and a 500GB (7200 RPM) hard drive. I had a VM configured from another environment and created the virtual machine config file in VirtualBox; everything seemed OK. Fired it up, and it was s l o w: it took close to 10 minutes to load, and about 5 more before I saw Windows was in the process of loading, before the BSOD. Thought, maybe, just maybe it would not happen again... oh, was I wrong. Frustration had already hit an all-time high with this configuration and the number of issues I've had. How I did the troubleshooting: the best thing to do (IMO) is to step back and gather your tools to debug the situation. Tools: the VirtualBox command line tools and Windows Debug. VirtualBox comes with a pretty good set of tools to examine and migrate VMs, and to handle overall VM tasks. The first step: use VBoxManage to prevent the VM from rebooting after the error, to get enough time to really dig into the BSOD. Command used: VBoxManage setextradata VMNAME "VBoxInternal/PDM/HaltOnReset" 1 Once this was done, the error reported was an “Inaccessible boot device” coming from a “Stop - 7B” type of error on the BSOD. The issue I had with this: my VM was configured to use a virtual SATA controller, and I thought Windows 2008 R2 would handle that fine... wrong again! The integration tools from the other product were trying to take effect, and that was throwing everything off. The fix: the fix was almost handed to me. I edited the configuration for the VM, removed the SATA controller, added the virtual hard drive under an IDE controller, booted up, and voilà... it works! I was then able to install the VirtualBox guest tools and such, but have decided to favor “keep on working” over “let's try SATA again”.
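
    For reference, the same controller swap can be scripted with VBoxManage; a sketch, with the VM name, controller names, and disk path as placeholders to adjust:

      # Detach the disk from the SATA controller, then reattach it under IDE.
      VBoxManage storageattach "VMNAME" --storagectl "SATA Controller" \
          --port 0 --device 0 --medium none
      VBoxManage storagectl "VMNAME" --name "IDE Controller" --add ide
      VBoxManage storageattach "VMNAME" --storagectl "IDE Controller" \
          --port 0 --device 0 --type hdd --medium /path/to/disk.vdi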

    Read the article

  • cookie not being sent when requesting JS

    - by Mala
    I host a webservice and provide my members with a JavaScript bookmarklet, which loads a JS script from my server. However, clients must be logged in in order to receive the script. This works for almost everybody. However, some users on setups (i.e. browser/OS) that are known to work for other people have the following problem: when they request the script via the JavaScript bookmarklet from my server, their cookie from my server does not get included with the request, and as such they are always "not authenticated". I'm making the request in the following way: var myScript = eltCreate('script'); myScript.setAttribute('src','http://myserver.com/script'); document.body.appendChild(myScript); In a fit of confused desperation, I changed the script page to simply output "My cookie has [x] elements" where [x] is count($_COOKIE). If this extremely small subset of users requests the script via the normal method, the message reads "My cookie has 0 elements". When they access the URL directly in their browser, the message reads "My cookie has 7 elements". What on earth could be going on?!
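
    The symptom (cookie present on direct visits, absent when the script is pulled from another site's page) is exactly what third-party cookie blocking looks like: the bookmarklet runs in the context of whatever site the user is on, so the request to myserver.com is cross-site. A hedged workaround sketch is to stop relying on the cookie and carry a token in the script URL instead; the token variable and parameter name here are made up, and the server would issue the token when the user installs the bookmarklet:

      // Same loader as above, but authenticating via an explicit token rather
      // than the cross-site cookie that some browsers refuse to send.
      var myScript = document.createElement('script');
      myScript.setAttribute('src',
          'http://myserver.com/script?token=' + encodeURIComponent(myToken));
      document.body.appendChild(myScript);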

    Read the article

  • How to fix GRUB on dual boot with Windows 7 and Ubuntu?

    - by b_oliv
    I am a relatively recent user of Linux. I had several releases of Ubuntu installed on my laptop working in dual boot and never had any issues. Recently, I installed openSUSE because I thought it would be necessary for an assignment at my university. It turns out it wasn't, so I returned to Ubuntu and decided to burn the new .iso to a CD and install it. The problem is that during the installation process I almost certainly messed up the partitions, and now, whenever I try to load Windows 7, it tells me that a required device is inaccessible. So, I reinstalled Ubuntu again, and now all I get is that I am redirected to the GRUB menu without any warnings. I tried creating a Windows Recovery Disk but it gives me an Unexpected I/O error. I suspect it is because it was downloaded from the Internet and maybe some files weren't there. I tried everything without success, so I decided to ask here, in the hope that I can receive some help and also learn how to help others with this in the future. Here is my boot info summary: http://paste.ubuntu.com/1344990/ Also, I might add that in the boot-repair advanced options, the 'repair Windows boot files' box is locked, so I can't check it. EDIT: Apparently, the box is locked because, from what I understood after reading the boot-repair information, everything is fine with my Windows boot files... I still need some guidance, though.
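
    If a working Windows 7 install disc can be borrowed, the "required device is inaccessible" side is normally repaired from its recovery console; a sketch of the usual sequence (note that the first command overwrites GRUB in the MBR, so boot-repair has to be run again from the live CD afterwards):

      rem From the Windows 7 DVD: Repair your computer -> Command Prompt.
      rem Warning: /fixmbr removes GRUB from the MBR; rerun boot-repair after.
      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd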

    Read the article

  • Enable [command] key to register as something other than just [ctrl]?

    - by gojomo
    I'm running 10.04 LTS inside VMware Fusion on a Mac. The [command] key (aka [windows] on many keyboards) is almost always behaving as if it were [ctrl], even though I haven't done anything explicit to request that behavior. In fact, in System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior, 'default' is chosen (rather than the 'Control is mapped to Win keys' option). However, choosing other options there does not seem to change the handling of [command], at least not as tested in the System > Preferences > Keyboard Shortcuts app. (No matter what I've tried, [command]-x is always detected as [ctrl]-x in that app.) I've tried: various options under System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior; toggling the VMware Fusion > Preferences > Keyboard & Mouse > Key Mappings setup, which claims to map '[command]' to '[windows]', and restarting the VM in each position; and the xmodmap lines suggested at https://help.ubuntu.com/community/MappingWindowsKey. And yet, it's clear that not all Ubuntu apps merge [ctrl] and [command], because in Terminal, [shift]-[ctrl]-c will Copy, but [shift]-[command]-c will not. If the [command]/[windows] key were recognized as anything else ('Super', 'Meta', 'Hyper'? I don't care as long as it's not 'Control'), then I could achieve my real goal (which happens to be enabling CMD-based cut/copy/paste in PyCharm, while leaving CTRL-X/etc available for emacs-like bindings). I think any solution which manages to make [command]-x appear as something other than [ctrl]-x in Preferences > Keyboard Shortcuts will probably do the trick.
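
    One way to see what is actually happening is to check which modifier the guest attaches the key to, then move it explicitly. A sketch, under the assumption that the VM delivers the key as Super_L but has bound it to the control modifier:

      xmodmap -pm                   # print the current modifier map
      xev | grep -i keysym          # press [command]; see which keysym arrives
      # if Super_L shows up under 'control', detach it and put it on mod4:
      xmodmap -e "remove control = Super_L"
      xmodmap -e "add mod4 = Super_L"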

    Read the article

  • Ubuntu audio mysteriously stopped working (12.04)

    - by Laika
    Well, I've been a user of Ubuntu 12.04 LTS since April now, and it's been a very pleasant experience. I'm a big fan of electronic music, and I tend to have my tracks playing in the background while I do things on my laptop, either in YouTube or in Clementine, my default music player. All has worked very well until now. A couple of days ago my entire PC started to lag really badly. Almost everything was unusable. I opened up System Monitor via the terminal to find a process called "pulseaudio" using nearly 1GB of RAM and over 80% of my CPU. I needed to get some important work done and so I killed the process without thinking. Once again today, pulseaudio decided to lag the hell out of my PC, and so I killed it again. Nothing seemed to happen immediately, but once I opened up YouTube all the audio on videos stuttered a lot, while the videos played smoothly. I restarted Firefox to find that the audio was now not working at all, with both headphones and speakers, and the volume up quite a bit (it's not muted, I've checked that!). A little bit of research later and I've discovered that pulseaudio plays an important part in Ubuntu's audio. Even after restarting my PC the audio still ceases to work in any applications or with any output. The pulseaudio process refuses to start up again. So, can you help me out here? What can I do to fix my problem, and why was pulseaudio doing this in the first place?
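
    A first-aid sequence that often helps in this situation is to restart the daemon cleanly and, if it refuses to come back, move its per-user state aside so it regenerates at the next login; a sketch (the ~/.pulse path is the 12.04-era location):

      pulseaudio -k                 # kill any wedged daemon (errors are fine)
      pulseaudio --start            # try to bring it back up
      # if it still won't start, reset the per-user state and log out and in:
      mv ~/.pulse ~/.pulse.bak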

    Read the article

  • External USB hard drive is not being seen anymore

    - by incrediblehulk
    I think my problem is a little different from several other similarly titled questions. Everything started while I was using 10.10. The external drive was always recognized and mounted, but the timing of this differed. I mean, when I booted, the OS sometimes saw the drive immediately, sometimes after a few minutes, sometimes after hours. Although this was annoying, I tolerated it somehow. The problem persisted after I upgraded to 11.04 with a clean install. Afterwards, the drive became totally invisible to the OS. It is not even detected as a USB device anymore. However, there is one thing I can do to make it visible. If I boot into another operating system which can detect the drive, and then boot back into Ubuntu, everything is perfect, but this is of course very impractical. To summarize, the USB drive is recognized by Ubuntu if and only if another OS on the same computer recognized it first. I should also say I have not had any problems with the same drive in any other OS or on a different computer. My drive is a Philips with a Hitachi HDD inside and has its own power source, and other USB-powered drives have never caused this kind of problem. I've tried almost all recommendations in similar topics, but none of them seems to be related to this one. What can I possibly do to fix this?
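
    To narrow down whether this is a detection problem or a mount problem, it is worth watching what the kernel sees at plug-in time; a short diagnostic sketch:

      lsusb                        # is the enclosure on the USB bus at all?
      udevadm monitor --udev &     # leave running, then plug the drive in
      dmesg | tail -n 20           # the kernel's view of the last hotplug event

    If nothing appears in any of these when the drive is plugged in, the enclosure never re-enumerated on the bus, and no mount-side fix will help.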

    Read the article

  • How to synchronize a whole Ubuntu?

    - by Avio
    I think that the time is ripe to have my whole Ubuntu synchronized just as my Dropbox folder is. Given that we are always talking about files and directories, what's the difference between my Documents folder and my /usr system directory? Almost none, except for their location. In fact, I think that there is just one big issue that prevents people from having their beloved installations mirrored wherever they go: symlinks. Dropbox, Google Drive, Ubuntu One, SugarSync, SkyDrive: none of these services supports symlinking. This means that if I push a symlink into one of the synced folders, locally the symlink is kept as-is, but remotely (in the cloud or on the other synced machines) the symlink is resolved to the actual file that was originally pointed to. This completely disrupts Linux installations, so these services can't be used for this purpose. So the question is: does anybody know a way to achieve this? A whole Ubuntu, always synchronized with a remote running copy, but still locally stored on both disks? My best guess is that I could use NFS. But the main difference between Dropbox and NFS is that NFS is a remote filesystem that always forces remote access to the files, while Dropbox pushes modifications to local filesystems (and thus would perform better). I've also heard about NFS caching. Does anybody know if this solution could approximate Dropbox in this sense? P.S. I know that /boot, /dev, /proc, /run, /tmp and device-specific mountpoints in /mnt and /media will have to be left out of the sync mechanism. What I'm interested in is the principle. Can this be done with reasonable performance, with reasonable resources (e.g. ~1Mbps upload bandwidth and a public IP address)?
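
    As a point of comparison for the principle, a one-way mirror of the whole system is already possible with rsync, which does preserve symlinks as symlinks; what it lacks is Dropbox's continuous two-way propagation. A sketch, with the host and destination path as placeholders:

      # -a preserves permissions/owners/symlinks, -H hard links, -A ACLs,
      # -X xattrs, -x stays on one filesystem; --delete mirrors removals.
      sudo rsync -aHAXx --delete \
          --exclude={"/dev","/proc","/sys","/run","/tmp","/boot","/mnt","/media"} \
          / user@mirror:/backups/ubuntu-root/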

    Read the article
