Search Results

Search found 19157 results on 767 pages for 'shared folder'.


  • Short file names versus long file names in Windows

    - by normski
    I have some code that gets the short name for a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, after converting to the short form and back to the long form, the path becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT", so the short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS 2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if (pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if (newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if (pathLengthNeeded != 0)
                {
                    // Convert it back to a long path if possible.
                    // (For goodness' sake, can't we use Unicode throughout?)
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if (newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }
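
    For what it's worth, one way to check whether the ANSI leg is the culprit is to do the same round trip without ever leaving Unicode. A minimal sketch (in Python via ctypes for brevity; the path is hypothetical and must exist for these calls to succeed):

        import ctypes

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

        def round_trip(path):
            # First call with a NULL buffer returns the required length.
            needed = kernel32.GetShortPathNameW(path, None, 0)
            buf = ctypes.create_unicode_buffer(needed)
            kernel32.GetShortPathNameW(path, buf, needed)
            short = buf.value
            # Convert back to the long form, staying in UTF-16 throughout.
            needed = kernel32.GetLongPathNameW(short, None, 0)
            buf = ctypes.create_unicode_buffer(needed)
            kernel32.GetLongPathNameW(short, buf, needed)
            return short, buf.value

        print(round_trip(r"C:\ProgramData\My Folder\File.ext"))  # hypothetical path

    If the all-Unicode round trip resolves correctly while the ANSI one does not, the StringFromUserString conversion is the place to look.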


  • C# Combining lines

    - by Mike
    Hey everybody, this is what I have going on. I have two text files; call them A.txt and B.txt. A.txt is a config file that contains a list of folder names, one line per folder. B.txt is a directory listing that contains folder names and sizes, with multiple entries per folder. For each name in A that appears in B, I need to take all the lines in B that contain it and write them out as A|B|B|B etc. For example:

        A.txt:
            Apple
            Orange
            Pear
            XBSj
            HEROE

        B.txt:
            Apple|3123123
            Apple|3434
            Orange|99999999
            Orange|1234544
            Pear|11
            Pear|12
            XBSJ|43949
            XBSJ|43933

        Result.txt:
            Apple|3123123|3434
            Orange|99999999|1234544
            Pear|11|12
            XBSJ|43949|43933

    This is what I had, but it's not really doing what I need:

        string[] combineconfig = File.ReadAllLines(@"C:\a.txt");
        foreach (string ccline in combineconfig)
        {
            string[] readlines = File.ReadAllLines(@"C:\b.txt");
            if (readlines.Contains(ccline))
            {
                foreach (string rdlines in readlines)
                {
                    string[] pslines = rdlines.Split('|');
                    File.AppendAllText(@"C:\result.txt", ccline + '|' + pslines[0]);
                }
            }
        }

    I now realize the first "if" never matches, because Contains() compares whole lines and no line in B is exactly a bare folder name. Even apart from that, I believe my output file still wouldn't contain what I need.
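
    For comparison, here is a sketch of the grouping logic (in Python for brevity; the C# equivalent with a Dictionary<string, List<string>> is a direct translation). The file paths are the ones from the question:

        groups = {}              # folder name -> sizes, in A.txt order (Python 3.7+)
        with open(r"C:\a.txt") as a:
            for line in a:
                name = line.strip()
                if name:
                    groups[name] = []
        with open(r"C:\b.txt") as b:
            for line in b:
                name, _, size = line.strip().partition("|")
                if name in groups:       # note: case-sensitive ("XBSj" vs "XBSJ")
                    groups[name].append(size)
        with open(r"C:\result.txt", "w") as out:
            for name, sizes in groups.items():
                if sizes:                # names with no matches (HEROE) are skipped
                    out.write(name + "|" + "|".join(sizes) + "\n")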


  • Windows Home Server Passwords Do Not Match [closed]

    - by Ben Fulton
    I have a Windows Home Server that chugs along just fine most of the time. I've never bothered to put it on a UPS, so it's vulnerable to the power outages that happen a few times a year. After the most recent one, it came back and seemed to be fine, but whenever I try to access a shared folder I get "Passwords do not match". They matched before the power went out, and I couldn't update the WHS password since I apparently didn't know the old one. How do I fix this?


  • Windows Network copy and access denied randomly

    - by The King
    I have a Windows 2008 R2 server into which I've just installed a new, bigger HDD. I wanted to copy big AVI files to the new server HDD, which is shared on the local network. I have write access to the server's HDD and I can successfully copy smaller files to it. But when I copy bigger files (more than 500 MB), I randomly get an Access Denied message during the copy. If I use RDP, I can copy the files through the RDP client. I checked the error messages on the server but didn't find anything about this Access Denied. Because the RDP copy works, I don't think this is a hardware error; I think it is some kind of software setting error. Has anyone faced this kind of error? Or does anybody have an idea what could cause it, or how to find the root of the problem?


  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository the way the Subversion book suggests, which is also how my previous companies have done it. It looks something like this:

        /trunk
        /branches
        /tags
        /extlibs
        /docs

    where the first three are pretty obvious, and /extlibs is for third-party assemblies that we wouldn't typically recompile ourselves. All of this works great for daily development. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except that this layout results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs, but I can only specify one repo folder to download in the TeamCity VCS settings. The other possibility is to put /extlibs into /trunk, but in order to compile branches, /extlibs would then have to go into all of those as well (since I usually branch the trunk, not individual subfolders), and that seems infinitely more evil, since /extlibs, with all of the binaries stored there, could actually be larger than /trunk and /branches. Do you guys have any suggestions for me? Thanks!


  • Using a regex snippet to rewrite a query-string path

    - by Shelby Poston
    I'm using the snippet below to load images based on two IDs from the query string: one is the book ID, and the other is the client. My folder structure is this: the root path is /flipbooks, and the subfolders under flipbooks are /books and /clients. In the /books subfolder I have a .NET page called tablet; the tablet code-behind checks the client's book ID and renders the tablet page with images in a flipbook fashion. Because we have over 15,000 records and flipbooks already created and stored in the database, I can't move the client folders under the /books subfolder. I need the code below to resolve the client subfolder from the query string, and any help changing it would be appreciated. The result now is http://www.somewebsite.com/books/client/images/someimage1.jpg; I need the result to be http://www.somewebsite.com/client/images/someimage1.jpg. I tried moving the tablet.aspx file to the /flipbooks root and it works, but then I have to provide a user name and password each time. This needs to be accessible to the public, and my root is protected; I don't want to have to change permissions. I am trying to remove the /books segment.

        function getParameterByName(name) {
            var results = RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
            return results ? decodeURIComponent(results[1].replace(/\+/g, ' ')) : null;
        }

    Thanks, Mission Critical


  • Establish direct cable connection between Windows 8 PCs in home network

    - by Marie. P.
    I'm running two PCs, a desktop and a laptop, with Windows 8 Release Preview ("Build 8400"). They are connected to the same router in infrastructure mode, thereby having wireless internet. Because I often synchronize files between the machines, I want to establish a cable connection that allows direct file transfer without needing to use the wireless. When I plug in the cable (normal, not crossover), I see "Ethernet - unidentified Network" on both PCs in "Control Panel\Network and Internet\Network Connections". Transferring a file between the two still only uses the WiFi via the router. I noticed that when I turn off the WiFi on one PC, I can set up a shared internet connection that works via the Ethernet cable, but since sometimes only one PC is running and sometimes the other, I do not want the internet of one machine to depend on the other being switched on. I do not have a crossover cable, but since I have already connected the PCs successfully (just without both being on the internet), I'm sure this should also work with a normal Ethernet cable.


  • How can I unobtrusively back up a few clients' email?

    - by tladuke
    This is a small office. Our web/email server is a shared host. In the office we have a Windows 2008 box that is up all the time and runs our NAS and a couple of other services. I don't have access to the ISP admin tools, but I assume it has cPanel or something like that; I can get access if I ask. I want to get email backed up from the server to our NAS without the users having to do anything. I suppose I could set up Outlook on that server with everyone's account, but that's a terrible idea (what about sent mail?). The boss uses Outlook, but we have Apple Mail and Thunderbird clients too. I guess the important thing is that Outlook can look at the backups, so the boss is happy. Then again, maybe the mail should be stored in whatever format is most portable (and works on NTFS). This is for about 10 users.
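
    If the shared host exposes IMAP (typical for cPanel hosts), a scheduled task on the Windows 2008 box could pull each mailbox to the NAS as individual .eml files, which every client mentioned can read. A rough sketch, with placeholder host, accounts, and paths (credentials are inlined here only for brevity):

        import imaplib, pathlib

        ACCOUNTS = [("alice", "secret1"), ("bob", "secret2")]    # placeholders
        BACKUP_ROOT = pathlib.Path(r"\\NAS\mail-backup")         # placeholder share

        for user, password in ACCOUNTS:
            with imaplib.IMAP4_SSL("mail.example.com") as imap:  # placeholder host
                imap.login(user, password)
                imap.select("INBOX", readonly=True)              # don't flag as read
                ok, data = imap.search(None, "ALL")
                for num in data[0].split():
                    ok, msg = imap.fetch(num, "(RFC822)")
                    out = BACKUP_ROOT / user / (num.decode() + ".eml")
                    out.parent.mkdir(parents=True, exist_ok=True)
                    out.write_bytes(msg[0][1])

    .eml is about as portable as mail storage gets on NTFS, and Outlook can open the files directly if the boss wants to inspect a backup.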


  • How to log Windows server share connections?

    - by sbussinger
    Can anyone suggest the best way to log connections and disconnections from Windows workstations to a Windows Server 2003 file share? We're having issues with workstations that have a drive mapped to the server: they work fine for a while and then suddenly appear to get disconnected from the server (with files open). Needless to say, this causes some data corruption and error messages. It would help me troubleshoot if we could somehow monitor and log the session connections and disconnections, to try to correlate the connectivity issues with what the user is doing at the time and what the server is doing. I just haven't been able to find a way to do this. Specifically, I'm talking about the same information that is displayed in the Computer Management console under "System Tools|Shared Folders|Sessions". Thanks!
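
    Short of enabling object-access auditing, one stopgap is to poll the same session list that the Shared Folders snap-in shows and log whenever it changes. A sketch that shells out to the stock net session command (run on the server itself; the log path is hypothetical):

        import datetime, subprocess, time

        LOG = r"C:\logs\share-sessions.log"   # hypothetical path
        previous = None

        while True:
            # "net session" lists the same data as Shared Folders > Sessions.
            current = subprocess.run(["net", "session"],
                                     capture_output=True, text=True).stdout
            if current != previous:
                with open(LOG, "a") as log:
                    log.write("--- %s ---\n%s\n" % (datetime.datetime.now(), current))
                previous = current
            time.sleep(10)    # disconnects shorter than the interval are missed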


  • Hyper V cluster - one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V on Server 2008 R2. We've got a number of VMs running that all have the same basic config: a VHD stored on a cluster shared volume, and 2 virtual NICs (one for the LAN connection and one for the SAN connection). All of our VMs will happily migrate between blades, apart from one single VM that runs fine on its current blade but will not migrate to any other location. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information logged anywhere. Edit: I know the usual culprit is mismatched resource names, and we've already been there with the NICs named differently on some of the blades. As far as we can tell, everything now looks identical on each piece of metal.


  • Execute Bash script on Ubuntu from remote Windows machine?

    - by John Isaacks
    I have a bash script on an Ubuntu 10.04 machine. It is shared, and I can access it from my Win7 machine at \\LINUX-SERVER\bash_repo\make-live. However, when I do, Windows tries to open it. This is not what I want: I want to tell Ubuntu to execute it. I'm actually hoping to build a GUI app on Windows where the user clicks a button and the bash script on the Ubuntu machine executes. Is any of this possible?
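
    The SMB share only exposes the file's contents; execution has to be requested on the Ubuntu side, and the usual channel for that is SSH (openssh-server on the Ubuntu box, plus any SSH client — OpenSSH, plink — on Windows). A sketch of the button handler's core, assuming key-based login is set up; the server-side path is a placeholder for wherever bash_repo actually lives:

        import subprocess

        def run_make_live():
            # Runs the script on the server; output comes back over the SSH channel.
            result = subprocess.run(
                ["ssh", "user@LINUX-SERVER", "bash", "/srv/bash_repo/make-live"],
                capture_output=True, text=True)
            return result.returncode, result.stdout, result.stderr

    Wiring run_make_live to a button click in any Windows GUI toolkit then gives the one-click behaviour described.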


  • Why is using a swap file over an SMB/NFS-mounted filesystem not possible in Linux?

    - by Avio
    I'd like to use another machine's unused RAM as swap space for my primary Linux installation; I was just curious about the performance of network ramdisks compared to (slow) local mechanical hard disks. The swapfile is on a tmpfs mountpoint and is shared through Samba. However, every time I try to issue:

        swapon /mnt/ramswap/swapfile

    I get:

        swapon: /mnt/ramswap/swapfile: swapon failed: Invalid argument

    and in dmesg I read:

        [ 9569.806483] swapon: swapfile has holes

    I've tried to allocate the swapfile with dd if=/dev/zero of=swapfile bs=1024 (and also bs=4096 and bs=1048576) and with truncate -s 2G (both followed by mkswap swapfile), but the result is always the same. In this post (dated back to 2002) someone says that using a swapfile over NFS/SMB is not possible in Linux. Is this statement still valid? If so, what is the reason for this choice, and is there any workaround to get this working?


  • How do I delete folders in bash after successful copy (Mac OSX)?

    - by cohortq
    Hello! I recently created my first bash script, and I am having problems perfecting its operation. I am trying to copy certain folders from one local drive to a network drive. The problem is deleting the folders once they are copied over (well, and also really verifying that they were copied). Is there a better way to delete folders after rsync is done copying? I was trying to exclude the live TV buffer folder, but really, I can blow it away without consequence if need be. Any help would be great! Thanks!

        #!/bin/bash
        network="CBS"
        useracct="tvcapture"
        thedate=$(date "+%m%d%Y")
        folderToBeMoved="/users/$useracct/Documents"
        newfoldername="/Volumes/Media/TV/$network/$thedate"
        echo "Network is $network"
        echo "date is $thedate"
        echo "source is $folderToBeMoved"
        echo "dest is $newfoldername"
        mkdir $newfoldername
        rsync -av $folderToBeMoved/"EyeTV Archive"/*.eyetv $newfoldername --exclude="Live TV Buffer.eyetv"
        # this fails when there is more than one *.eyetv folder
        if [ -d $newfoldername/*.eyetv ]; then
            # this deletes the contents of the directories
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv \( ! -path $folderToBeMoved/"EyeTV Archive"/"Live TV Buffer.eyetv" \) -delete
            # remove empty directory
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv -type d -exec rmdir {} \;
        fi
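
    One alternative worth knowing about: rsync's --remove-source-files flag deletes each source file only after it has transferred successfully, which replaces both the fragile if [ -d ... ] test and the find ... -delete pass. A sketch of the same job (in Python here; the flag works identically straight from bash, and note this version syncs everything under the archive folder rather than only the *.eyetv bundles; the date folder is hypothetical):

        import os, subprocess

        src = "/users/tvcapture/Documents/EyeTV Archive"
        dst = "/Volumes/Media/TV/CBS/05202010"        # hypothetical date folder

        # Each source file is deleted only after rsync has copied it successfully.
        subprocess.run(["rsync", "-av", "--remove-source-files",
                        "--exclude=Live TV Buffer.eyetv", src + "/", dst],
                       check=True)

        # rsync leaves the now-empty directories behind; prune them bottom-up,
        # skipping anything (like the excluded buffer) that still has contents.
        for root, dirs, files in os.walk(src, topdown=False):
            for d in dirs:
                path = os.path.join(root, d)
                if not os.listdir(path):
                    os.rmdir(path)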


  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program that does what I want, so I'm thinking about writing my own. What I have now does the following: it goes through the data in the source and, for every file that has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting any existing file. When done, it checks every file in the destination and, if it doesn't exist in the source, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is, in principle, already there, just under a different path; the folder that was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, i.e. by NTFS ID, physical location on media, or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
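
    NTFS does have such an identifier: every file and directory carries a 64-bit file reference number (file ID) that is stable across renames and moves within the same volume — it is what fsutil file queryfileid shows. In Python on Windows (3.5+), os.stat() reports it as st_ino, so comparing an index saved at the previous backup run against a fresh one reveals moves without touching file contents. A sketch with hypothetical paths:

        import os

        def index_by_file_id(root):
            # Map NTFS file ID -> path. On Windows, st_ino is the file reference
            # number; it survives renames/moves within one volume, but can be
            # reused after a file is deleted.
            index = {}
            for dirpath, dirnames, filenames in os.walk(root):
                for name in dirnames + filenames:
                    path = os.path.join(dirpath, name)
                    index[os.stat(path).st_ino] = path
            return index

        old = index_by_file_id(r"C:\data")   # persist this between backup runs
        # ... next run, after the user has shuffled folders around ...
        new = index_by_file_id(r"C:\data")
        for file_id, path in new.items():
            if file_id in old and old[file_id] != path:
                print("moved/renamed:", old[file_id], "->", path)

    In C++ the same number comes back from GetFileInformationByHandle (the nFileIndexHigh/nFileIndexLow fields), which C# can reach via P/Invoke.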


  • MVC multi page form losing session

    - by Bryan
    I have a multi-page form that's used to collect leads. There are multiple versions of the same form that we call campaigns: some campaigns are 3-page forms, others are 2 pages, some are 1 page. They all share the same lead model, campaign controller, etc. There is one action for controlling the flow of the campaigns, and a separate action for submitting all the lead information into the database. I cannot reproduce this locally, and there are checks in place to ensure users can't skip pages. Session mode is InProc. This runs after every POST action and stores the values in session:

        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            base.OnActionExecuted(filterContext);
            if (this.Request.RequestType == System.Net.WebRequestMethods.Http.Post && this._Lead != null)
                ParentStore.Lead = this._Lead;
        }

    This is the Lead property within the controller:

        private Lead _Lead;

        /// <summary>
        /// Gets the session-stored Lead model.
        /// </summary>
        /// <value>The Lead model stored in session.</value>
        protected Lead Lead
        {
            get
            {
                if (this._Lead == null)
                    this._Lead = ParentStore.Lead;
                return this._Lead;
            }
        }

    ParentStore class:

        public static class ParentStore
        {
            internal static Lead Lead
            {
                get { return SessionStore.Get<Lead>(Constants.Session.Lead, new Lead()); }
                set { SessionStore.Set(Constants.Session.Lead, value); }
            }
        }

    Campaign POST action:

        [HttpPost]
        public virtual ActionResult Campaign(Lead lead, string campaign, int page)
        {
            if (this.Session.IsNewSession)
                return RedirectToAction("Campaign", new { campaign = campaign, page = 0 });

            if (ModelState.IsValid == false)
                return View(GetCampaignView(campaign, page), this.Lead);

            TrackLead(this.Lead, campaign, page, LeadType.Shared);

            return RedirectToAction("Campaign", new { campaign = campaign, page = ++page });
        }

    The problem occurs between the above action and the point where the following Submit action executes:

        [HttpPost]
        public virtual ActionResult Submit(Lead lead, string campaign, int page)
        {
            if (this.Session.IsNewSession || this.Lead.Submitted || !this.LeadExists)
                return RedirectToAction("Campaign", new { campaign = campaign, page = 0 });

            lead.AddCustomQuestions();
            MergeLead(campaign, lead, this.AdditionalQuestionsType, false);

            if (ModelState.IsValid == false)
                return View(GetCampaignView(campaign, page), this.Lead);

            var sharedLead = this.Lead.ToSharedLead(Request.Form.ToQueryString(false));
            // The error occurs here and sends me an email with whatever values are in the form collection.
            EAUtility.ProcessLeadProxy.SubmitSharedLead(sharedLead);

            this.Lead.Submitted = true;
            VisitorTracker.DisplayConfirmationPixel = true;
            TrackLead(this.Lead, campaign, page, LeadType.Shared);

            return RedirectToAction(this.ConfirmationView);
        }

    Every visitor to our site gets a unique GUID visitor ID, but when these errors occur, the visitor ID differs between the Campaign POST and the Submit POST. Because we track each form submission via the TrackLead() method during the Campaign and Submit actions, I can see that session is being lost between calls, despite OnActionExecuted firing after every POST and storing the form in session. So when there are errors, we get half the form under one visitor ID and the remainder under a different one. Luckily we use a third-party service that sends an API call every time a form value changes, using its own ID; those IDs are consistent between the first half of the form and the remainder, and they are the only way I can salvage the leads lost to the session issues.
    I should also note that this works fine 99% of the time. EDIT: I've modified my code to explicitly store my lead object in TempData and used the TempData.Keep() method to persist the object between subsequent requests. I've only deployed this behavior to 1 of my 3 sites, but so far so good. I had also tried storing my lead objects in Session directly in the controller action, i.e. Session.Add("lead", this._Lead), which uses HttpSessionStateBase, attempting to circumvent the wrapper class, instead of HttpContext.Current.Session, which uses HttpSessionState. This modification made no difference, as expected.


  • rss downloader script

    - by The Digital Ninja
    I have a Synology NAS, powered by Linux, at my house. I'm looking to set up a cron script that checks a group of RSS feeds and auto-downloads new video podcasts to a shared folder. I can do most of the scripting, such as deleting files older than 3 weeks and the wget parts, but I'm not sure how to parse the RSS feeds and check the dates so as to grab only the latest items. I figure it's best not to reinvent the wheel, and surely someone out there has a command-line RSS downloader or some such script. Any ideas?
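
    The parsing half can stay small without extra packages. A sketch using only the Python standard library — feed URLs and destination folder are placeholders, and it assumes the feeds carry time-zone-qualified pubDate values, as most do:

        import subprocess, urllib.request
        import xml.etree.ElementTree as ET
        from datetime import datetime, timedelta, timezone
        from email.utils import parsedate_to_datetime

        FEEDS = ["http://example.com/videocast.xml"]    # placeholder feed URLs
        DEST = "/volume1/video/podcasts"                # placeholder shared folder
        cutoff = datetime.now(timezone.utc) - timedelta(days=1)  # run daily from cron

        for feed in FEEDS:
            tree = ET.parse(urllib.request.urlopen(feed))
            for item in tree.iterfind(".//item"):
                published = parsedate_to_datetime(item.findtext("pubDate"))
                enclosure = item.find("enclosure")      # the actual media file
                if enclosure is not None and published > cutoff:
                    subprocess.run(["wget", "-P", DEST, enclosure.get("url")],
                                   check=True)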


  • Cannot umount: device is busy

    - by user132199
    Situation: I am running a RHEL 6 server in a VM on my laptop. A Win7 desktop shares out a folder, and the VM on my laptop has the Windows share mounted via CIFS at /mnt/win. When I go to unmount the device, I get a "device is busy" message. So I went to my laptop and checked whether any users were connected to the share; since none were listed, I turned off sharing. I went back to my RHEL instance and attempted another umount /mnt/win, but received the same error.

    Question: What are the alternatives for unmounting a shared drive that is busy?


  • Can a S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a bucket mounted at /mnt/s3 and can read and write files to it with no problem. In my httpd.conf I have DocumentRoot "/mnt/s3", but when I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have shared storage so my nodes can scale easily and access the same document root. Thanks


  • Dealing with image upload on server

    - by user1073320
    I have got the following problem: I have a multi-step form where, in one step, the user uploads an image to the server and then, a few steps further on, supplies other information. When this information is invalid, no data should be committed, and the image should also be deleted. I was thinking about the PHP session, but I've read here (PHP - Store Images in SESSION data?) that this is an inefficient approach: every time you proceed a step in the form, the image is reloaded (in the session), and as somebody mentioned, "You will want it to only be as big as it needs to be and you need to delete it as soon as you don't need it because large pieces of information in the session will slow down the session startup." Here I have a question: will it slow down the session startup only for the user who uploaded the file, or for all users' sessions? I have to mention that I'm looking for a solution that doesn't rely on operating-system scripts (cron or the like); I have no permission to run such scripts. The perfect solution for me would be: save the image on disk (for example in a folder named after the session ID), then after the last step of the form, move or delete the image depending on form validation. If the user unexpectedly destroys the session (for example by closing the browser), the folder with the image should of course be deleted. In a nutshell, I need something like a callback for the "session destroyed" event.


  • Client-Side caching on IIS7 doesn't seem to work

    - by thomasbtv
    I have set up content caching on a specific folder by following the local web.config method. I don't think it works, and I would like to fix it. I activated the cache using the IIS / HTTP Headers / Common Headers feature and set the headers to expire after 1 day. I opened a page with Google Chrome in private browsing, then opened the Network tab in the console. The first time I load the page, everything loads from the site, obviously. If I refresh the page, I see two types of loading in the Network console: the files from Google and Facebook and such have a status of 200 and a size of "(from cache)", while the files from the folder for which I set up caching have a status of 304 and their size is displayed. So, does the caching setting not work? Or does a 304 response mean it's loaded from the cache? If it isn't working, how can I make it work? Thanks!


  • Source folders for a maven project in eclipse

    - by 4NDR01D3
    Hello all, I have a project that uses Maven, and I want to set it up in my working environment with Eclipse (Galileo). The project is on an SVN server; I can check it out, and everything looks OK. I can even run the unit tests, and everything works there. However, now that everything is in place, I wanted to work on the code — and, oh surprise, there are no packages in my project. I mean, all the source code is in the src folder, and browsing through it I can see all my files, but if I open the files from there, they are opened as plain text files with no coloring, and worse, with no help at all about compilation errors. I don't know what I'm doing wrong, because I had the same project working well on another machine. So here is what I did; please let me know if you notice anything wrong, a missed step, or anything else that can help:

        In the SVN Repository view (using Subclipse 1.6.10):
        - I added my SVN repository
        - Browsed to the folder where I have the pom file
        - Right click > Check out as a Maven project... (using m2eclipse 0.10.020100209)
        - Used the default options and Finish

    The projects were created with no problem. I say projects, because this Maven project has modules, and each module became a project in Eclipse.

        Back in the Java perspective:
        - Right click on the project > Run as > Maven test (using JWebUnitTest, because I am testing a servlet)
        - BUILD SUCCESS!!

    But, as I said, there are no packages, so I can't really develop in this environment. Any help?? Thanks!


  • Ubuntu 9.10 Download Speed Very Slow

    - by Don
    I'm running Ubuntu 9.10 desktop, and I'm new to the Linux world, so bear with me. I'm on a corporate network of 3 T1s shared across 50-60 users. I typically get about 300 KB/s for downloads, but for whatever reason, the Linux box will start out in that range and then sometimes drop to less than 1 KB/s. It doesn't seem to matter where I'm downloading from. Right now I'm trying to get Eclipse for PHP, and it's running at 3-6 KB/s. Getting updates for the system will also drop to very slow rates. Our IT person has set up the machine to get the same 10.0.0.x address when it starts, and moved this IP to bypass our proxy/firewall going out, so that shouldn't be the issue. Can anyone recommend something I can try to better diagnose the problem? Again, I'm new to the Linux world and to the hardware/OS setup side in general (coming from more of a coding background). Thanks for any advice.


  • Which is the best license for my Open Source project?

    - by coderex
    I am a web developer, and I don't have enough knowledge about software licenses. I wish to publish some of my work, and I need to select licenses for it. My software product is free of cost, but I have some restrictions on the distribution/modification of the code:

        - It's free of cost (but donations are acceptable ;-)).
        - The source code is freely available.
        - You can use, customize, or edit/remove code (as long as the basic nature of the software is not changed).
        - You don't have permission to change the product name.
        - There are some libraries and classes in a folder called "myname"; you don't have permission to rename "myname".
        - You can contribute any additions or modifications to my project to the original source repository (the contributor's name/email/site link will be listed in the credits file).
        - You can't remove the original author's name from the license.
        - You can put the license file or license code anywhere in the project file or folder.
        - You can redistribute this code as free or commercial software. :)

    Do you think all these restrictions are valid? Given these restrictions, which license should I use?

    Edit 1: My main intention is to make the product more popular with free source code, while ensuring the original author is not ignored. The product is open.

    Edit 2: Thank you all; the above points came out of my lack of knowledge of license terms. You can help me correct or remove some of the above points. What I'm basically looking for is in Edit 1.


  • Can TFS workspaces be used without being tied to a specific machine?

    - by GWLlosa
    So I've got a situation where we have a project with 10 developers. Each developer, when they come in for the day, is randomly issued a machine to use for development that day. The machine names are different, say DEV01-DEV10. At the time they are issued to the developers, the machines are identical, and no changes the developers make during the day are persisted on the machines (source-code changes are stored in TFS, not locally). These are of course actually virtual machines, but that's not really relevant to the point at hand. The problem is that each morning, the developers run into three issues:

    1) The machine they are assigned may not be the machine they were last assigned to. For example, DevMan A might have used DEV04 yesterday and received DEV06 today. His workspace definitions are tied to DEV04; he must now create a new workspace, or migrate the old workspace to DEV06.

    2) The machine they are assigned may have been in use yesterday, and some of the mappings may conflict. For example, DevMan A might have DEV04 today and wish to create a workspace mapping the project folder to "C:\MyProj\Solution". However, DevMan B had DEV04 yesterday and used the same project folder, so TFS now complains.

    3) It may be the first time they are on a given machine, so they need to recreate all of their source-control mappings for that machine.

    All of these issues can be resolved in a straightforward fashion on a case-by-case basis, but it does sap some productivity from the morning. We'd much prefer it if the TFS workspace definitions could be 'relaxed', so that they did not include the machine name somehow. Barring that, if anyone is aware of a solution to the above problems that can run automatically, or with limited user intervention, that would also be ideal.


  • Display CPU usage separately (without root privileges)

    - by synaptik
    I need to display the CPU usage for each processing core on a single shared-memory 12-core (SMP) machine. I don't have access to install htop, or else I would simply use that. I don't need fancy graphs or meters, though they would be nice. For example, simply displaying:

        X X X X X X X X X X X X

    where each X is the percentage utilization of one of the 12 processing cores on my machine. FYI: I know I can simply look at the utilization in top and divide that number by the number of cores on my machine, but I'd prefer a solution that shows each core separately.
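
    Everything needed is already in /proc/stat, so nothing has to be installed. A sketch that samples each core twice, one second apart, and prints the twelve utilization percentages:

        import time

        def read_cores():
            # /proc/stat lines look like: "cpu0 user nice system idle iowait irq ..."
            with open("/proc/stat") as f:
                return [list(map(int, line.split()[1:]))
                        for line in f
                        if line.startswith("cpu") and line[3].isdigit()]

        first = read_cores()
        time.sleep(1)
        second = read_cores()

        for core, (t0, t1) in enumerate(zip(first, second)):
            delta = [b - a for a, b in zip(t0, t1)]
            total = sum(delta) or 1
            idle = delta[3] + delta[4]            # idle + iowait columns
            print("cpu%-2d %5.1f%%" % (core, 100.0 * (total - idle) / total))

    (Plain top can also do this interactively: pressing 1 toggles the per-core breakdown.)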

