Search Results

Search found 6058 results on 243 pages for 'short film'.


  • Shortcut To Full Screen App In Lion

    - by omghai2u
    I postponed getting OS X Lion for as long as I possibly could. Now that I have it, I'm having lots of difficulty getting it to behave how I want. On Snow Leopard my typical setup for working was 4 spaces. I'd keep a Windows VM open on space #4 full-screened, a Linux VM open on space #3, and I'd do other stuff on spaces #1 and #2. My keyboard shortcuts let me switch between my Windows work (Command + 4) and my Linux work (Command + 3) very quickly, without my hands leaving the keyboard (or effectively even quitting typing). Productivity was good. I see that on Lion a full-screened VM (and yes, they need to be full-screened; Fusion's Unity won't cut it for what I need to do) is its own separate desktop. I have set up 4 desktops and made my keyboard shortcuts to move between them Command + # just as before. But how do I get my full-screened VM to be one of those already existing desktops? Or rather, how do I make a shortcut to the full-screened app?

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of making the machine shut down even if network connectivity has been lost by unmount time -- or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)?

    It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet.

    Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the xfce shutdown button) works just fine; it only hangs if the mounts are still mounted. The puppet config that sets up the mount looks like this (now with the _netdev entry, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }
        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
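    For what it's worth, the custom-script kludge mentioned above can be tiny. This is only a sketch, and it assumes something (an init hook or similar) runs it before NetworkManager takes the wifi down during shutdown -- the hook point is the part Fedora makes awkward:

        #!/bin/sh
        # lazily detach every CIFS mount so nothing blocks on a network that is about to vanish
        umount -a -t cifs -l 2>/dev/null
        exit 0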

    Read the article

  • Recover data from hard drive with partitions (but not most data) overwritten

    - by Macha
    I have a 500GB hard drive I've been keeping around to recover data from; I removed it from a failing NAS that got sort of... erratic at the end. I finally got rid of the NAS when a firmware update removed the partition table. Fast forward to a week ago, when I was building a new PC, and a mixup resulted in me placing the hard drive in question in the new PC and installing Windows XP on the first 100GB. I'm presuming any data on that first 100GB is now gone, but for the rest of it, is there any way I can recover it at home, as professional data recovery is currently too expensive? I have a blank 1TB HDD that I can store an image of the drive on. The problem was definitely with the NAS and not the hard drive, as the hard drive took a successful install of Windows when mistakenly placed in the new PC, and there were capacitors in the NAS's circuitry that were clearly broken. The data I want to recover (in order of priority) is:
    High: some JPGs of family photos.
    Medium: some RAW files (there are also JPG versions of all of these).
    Low: some MP3s, AVIs and ISOs; I can re-rip most of these if need be, but it'd be handy not to have to.
    (I don't need a backup lecture, and if you can hold it in from nagging Jeff Atwood for it, you can hold it in from nagging me for it.) In short: the partition tables are gone and overwritten. The data is not overwritten, except for an amount equal to the size of a Windows XP SP3 installation.
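    At home, the usual route is to image the drive first and then work only on the image. A sketch, not a definitive procedure -- the device name and mount point here are assumptions, so double-check them before running anything:

        # copy the old NAS disk (assumed /dev/sdb) onto the blank 1TB drive mounted at /mnt/spare
        ddrescue /dev/sdb /mnt/spare/nas_drive.img /mnt/spare/nas_drive.log
        # then try to rebuild the lost partition table, or failing that carve files straight out of the image
        testdisk /mnt/spare/nas_drive.img
        photorec /mnt/spare/nas_drive.img      # understands jpg, most camera RAW formats, mp3, iso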

    Read the article

  • Suspected Corrupted Windows 7 MBR?

    - by AridDecay
    So, this may not be the correct place to put up my question, but I'll give it a shot. I'm having an issue repairing this computer. It was brought to me with the described issue of "not turning on". Later, I found that it would come up with the error "No boot sector found on internal hard drive." I assumed it was an MBR issue due to a virus or a Windows update being cut short. I booted into my trusty recovery environment and ran bootrec.exe /FIXMBR and restarted -- no luck. I started to think (after multiple attempts to get the MBR sorted out, including creating a new boot sector) that the hard drive was possibly starting to cave in on itself, so I booted into a Linux live CD and went to check the SMART data. Odd -- it says it's inaccessible. That seems strange to me, considering it's a newer (two years old or so) Windows 7 computer; all new hard drives have SMART. So I checked the BIOS. No mention of SMART anywhere. Greaaaat. As a last-ditch effort I switched the hard drive type in the BIOS from AHCI to ATA (God knows why, I was getting frustrated). VOILA! It actually attempts to boot, gets halfway through the little Windows animation, flashes an incredibly quick (half a second) BSOD, and shuts down. Does anyone have ideas on what's going on here? I'm at my wits' end.
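    One thing worth trying from the Linux CD before condemning the drive: query SMART with smartctl rather than a GUI tool, since "inaccessible" is often down to the query method rather than the drive itself. A sketch, assuming the disk shows up as /dev/sda:

        smartctl -i /dev/sda           # does the drive identify itself at all?
        smartctl -H /dev/sda           # overall SMART health verdict
        smartctl -a -d ata /dev/sda    # force the legacy ATA method if the default probe reports SMART unavailable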

    Read the article

  • W2003StdR2 server: DNS dysfunctional!

    - by Tor
    I hate to have to do this, but I feel up that creek with no... well, some of you might know. At the moment my one and only DNS server refuses to do forwarding. The story goes like this: this site had 2 servers, one W2003SBS and one W2003StdR2. The SBS degraded over a short period of time, and so as not to go down with it I decided to move all data over to the other server. This was of course an AD-integrated site. The move went OK, the Std server was removed from the domain, and the SBS was put to rest. For the time being we decided to run the Std box as a standalone server with no AD. We renamed the internal domain to xxx.local and set the server up with DNS and DHCP, and installed WINS (not activated). DNS forwarding goes to our ISP through a Netgear firewall, with the same address setup as before. So -- the DNS server started and all went OK, clients were reconfigured and hooked up, and then, after about a day, internet name resolution stopped working on the server! Nothing had changed, been altered, modified -- nothing! What I now get when doing NSLOOKUP is just a 2-second timeout response! I have checked and looked, but to no avail. Anybody seen this behaviour before? And yes, ALL service packs have been applied to the server. I would be much obliged if anyone in here could lend an ear... and give advice! Thanks... from Tor in Norway.
    Update: Today is the 14th, and I still have no resolution to this nagging problem. Anybody else got any advice in the matter? Please?

    Read the article

  • OS X Application icons missing after filesystem tampering

    - by dylan
    A while back I installed some Google software, I forget what it was, but it was something that attached itself to Google Chrome and was un-uninstallable short of uninstalling all of Chrome. It was a total pain in the a$$. Anyways, I was trying to put up a fight before going nuclear and just wiping Chrome, and during that fight something went wrong (i.e., I didn't know what I was doing). I remember messing around with the Applications folder somehow. I don't remember what exactly I did, but it perhaps involved some interchanging between the /Applications and /Users/Username/Applications folders? Definitely some tampering in those areas, I'm sure at least about that. Anyways, I've since updated and restarted my computer. Now, while all my apps work fine, many of them have blank icons -- not in the Dock, those icons are fine, but in Spotlight search results, etc. Currently, my /Applications folder contains only the applications OS X ships with. My ~/Applications folder contains built-in apps AND third-party apps. There hasn't been any loss of functionality so far, but I don't like working on a machine where something isn't how it should be. I know my information is vague, but does anyone know what went wrong / how to fix it? OS X Mavericks 10.9.3.
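    If the apps themselves are healthy, a stale icon/metadata cache is the usual suspect after shuffling the Applications folders around. A commonly suggested sketch -- verify the lsregister path on your system first, and note that rebuilding the Spotlight index takes a while:

        # re-register applications with LaunchServices from their current locations
        /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister \
            -kill -r -domain local -domain system -domain user
        # then rebuild the Spotlight index, which is where the search results get their icons
        sudo mdutil -E /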

    Read the article

  • Windows 7 boot animation slows down startup by default?

    - by kngofwrld
    I just upgraded my HDD to an SSD. I am running a completely fresh install and enjoy the short boot time. I tweaked the startup to be as fast as I could by removing unneeded apps and such, and I'm not running a solid-color desktop background (which is known to cause a 30-second startup delay). I have a 2.1GHz 64-bit laptop with 4 gigs of RAM, so it's not a liquid-cooled speed monster, but I checked some super-high-end PC boot vids on YouTube and noticed that they start up in almost the same time as my machine. I also noticed that the glowing Windows 7 animation plays all the way through no matter how fast the PC is. I turned off the animation, and the startup time is unchanged. I turned on verbose startup info and noticed that it runs until the very end, where it looks like it just sits there for a few seconds waiting for something to happen, for no apparent reason. So now I think that the Windows 7 startup animation has a timer built into it that forces the computer to wait for no other reason than to play the full animation. Super-fast XP boot vids on YouTube seem to start much faster (and not just because they "have less to load"). Am I imagining things? My question is: how can I turn off not just the animation, but the timer for the animation? Here is a vid that tipped me off; I have no relation to the poster. (Warning: soundtrack might be loud.) http://www.youtube.com/watch?v=T5LkX3xejJ4

    Read the article

  • Problem creating ODBC connection to SQL Server 2008 with Vista

    - by earlz
    Well, I'm trying to get a database schema tool working. First I tried doing it on Linux, where I'm more comfortable, but ODBC seems to be a hack there and I couldn't get it to work, so I figured it shouldn't be too hard on Windows. OK, so I created a SQL Server client alias so that I can simply use the name windowsserver to refer to my SQL server. Then I went to the ODBC configuration in Control Panel and clicked Add in the User DSN section. I chose Native SQL Server (10) and clicked Next. Then I typed a short name and a description and gave the server name as windowsserver/SQLEXPRESS. Then I click Next, give it my user name and password and click Next again. Then, after about 2 minutes, it says "Login Timeout Expired". What can be wrong here? I know the server is configured because I have SQL Server Management Studio open with that server in it. I'm also just trying to connect over regular TCP/IP, and my firewall is disabled.

    Read the article

  • How to tell Linux to explicitly swap out main memory of suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physically in my laptop, so it is paging and swapping and thrashing all the time and the loadavg is about 2 (compcache is already in use, with the usual swap partition as well), but it is slowly moving forward (although I'm afraid it will eventually try to allocate 2GB and crash, draining 2 days of thrashing). When I want to use the laptop for something else, I stop the process and start the X server, Firefox and other programs. The problem is that when I start Firefox the loadavg jumps to 10 and the system becomes almost completely unresponsive (a long time to toggle caps lock, slow mouse cursor position updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How do I tell Linux to swap this process out entirely (e.g. I'm not intending to resume it in the short term), possibly paging other data back in from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.
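    One approach that roughly matches the "swap it out entirely" part is the memory cgroup: clamp the stopped process's limit below its current RSS and the kernel reclaims the difference out to swap. A sketch only -- the mount point, group name and the 64M figure are assumptions, it assumes a single mkcromfs PID, and older kernels may refuse the write if they cannot reclaim enough:

        mount -t cgroup -o memory none /cgroup               # skip if the memory controller is already mounted
        mkdir /cgroup/parked
        echo $(pidof mkcromfs) > /cgroup/parked/tasks         # move the stopped process into the group
        echo 64M > /cgroup/parked/memory.limit_in_bytes       # the kernel pushes everything above 64M out to swap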

    Read the article

  • Simulate a DFS share for a user not on domain with a folder in path

    - by user223655
    I have a consultant whose computer is not on the domain but who needs to access various network resources. Unfortunately, while adding his computer to the domain is a difficult bureaucratic process (and would prevent much of his development software from even running, given the domain restrictions), we can give him credentials to access network resources. As such, he accesses various network resources via NET USE etc. without using DFS. There is one piece of software which requires him to have the same hardcoded path as other domain users, but that path is a DFS path which he can't map (i.e., the software checks the path at runtime and will only run if it matches the registered path, and it rejects the path when a DFS path is used instead of a conventional machine path). I was wondering if there's some method to simulate the DFS path without actually using DFS. E.g., the path the software needs to see is "\\ABC\DFS\software\app.exe" whereas the non-DFS path is "\\DEF\Software\app.exe". While I could make his hosts file point DEF to ABC, I'm not sure if I can somehow make it point there with the DFS "folder" as well. Are there any methods for this, short of making changes to AD to allow him to use DFS or adding him to the domain (both of which are politically/technically challenging, sadly)? Thanks guys.

    Read the article

  • Proxmox: VMs and different public IPs

    - by Raj
    I have a server with two NICs, both directly connected to the internet. I have five different public IP addresses available for the VMs. The host machine (Proxmox) doesn't need to use any of them (it'll use a private IP, and that's all) but it will have an internet connection. I've gone through the Proxmox documentation and I'm not able to put together the big picture for the right network configuration for my needs. In short, what I have is: one server (Proxmox, the host machine); 5 VMs created on that server; 5 public IP addresses available (one for each VM), let's say 80.123.21.1, 80.123.21.2, 80.123.21.3, 80.123.21.4, 80.123.21.5. What I have now for the host is the following:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto eth1
        iface eth1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.101
            netmask 255.255.255.0
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

        auto vmbr1
        iface vmbr1 inet manual

    The host can be reached from the internal network, so that's OK, and it has an internet connection, which is also OK. vmbr1 is going to be used by the VMs; each VM will have its own IP in its network interfaces configuration file. As things stand, though, the VMs have no internet connection and cannot use the public IP addresses. If I use NAT, it works correctly, but then they are not using the public IP addresses allocated to them. Am I missing something?
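    The missing piece is most likely that vmbr1 has no physical port attached, so guest traffic never reaches the second NIC. A sketch of the ad-hoc version using bridge-utils (interface names are assumptions; the persistent equivalent would be a bridge_ports entry in the vmbr1 stanza above):

        brctl addbr vmbr1          # skip if Proxmox has already created the bridge
        brctl addif vmbr1 eth1     # enslave the internet-facing NIC
        ip link set eth1 up
        ip link set vmbr1 up
        # each VM then configures one of the 80.123.21.x addresses and the ISP's gateway inside the guest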

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm just switched to Google Apps Premier Edition 2 weeks ago, and aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server, we had numerous shared contact lists set up in the shared folders: a separate list for vendors, sales agents, etc. Is there not a way to set up lists or groups like this at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers, unless you purchase some third-party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and this almost does it, but it falls short when trying to group contacts at the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking at creating a Google Docs spreadsheet to share with the domain, just to have a separate, defined list of contacts that is searchable by various fields... I'd be most grateful to anyone who could shed some light on domain-level contact sharing relating to the points above.

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:
    Quad-Core Intel Xeon X3323 2.5GHz (2x3MB cache), 1333MHz FSB
    16GB DDR2 667MHz
    But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs?
    2x Dual-Core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT
    16GB DDR2 667MHz
    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • Howto align partitions in Linux + NetApp

    - by santisaez
    NetApp support has suggested that we align our partitions to improve performance; in short, the starting sector must be divisible by 8. How can I move the start point of a misaligned partition -- in production, with ext3 -- under Linux? A screenshot of a misaligned (start=63s) and an aligned (start=64s) partition is available at: http://filesocial.com/lkwvvn2 (If anyone is interested in this topic, NetApp has a good document explaining performance issues with misaligned partitions; search for "tr-3747": Best Practices for File System Alignment in Virtual Environments.) I have tried using parted "resize + move" commands, but when moving the start point I get this error:

        (parted) resize
        Partition number? 1
        Start? [64s]?
        End? [419425019s]? 419425018
        (parted) move
        Partition number? 1
        Start? 65
        End? [419425019s]? 419425019
        Error: Can't move a partition onto itself.  Try using resize, perhaps?

    Using fdisk's 'b' command in expert mode ("move beginning of data in a partition") works, but it doesn't move the file system. Thanks!!
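    For anyone checking their own disks against the "start sector divisible by 8" rule first, a quick sketch (device name assumed):

        parted /dev/sda unit s print    # the Start column is in sectors: 63s fails the rule, 64s passes
        echo $((63 % 8)) $((64 % 8))    # prints "7 0" -- anything non-zero needs realigning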

    Read the article

  • Batch-Remove Specific Text from Photo Descriptions

    - by David
    I've recently upgraded to iPhoto '11 (couldn't resist the pricing on the new App Store) and as I'm adding more metadata to my library and generally organizing things (places, faces, etc... I hadn't upgraded since '08), I've noticed something odd in my photos. Every photo in my library has a description (though many are short), but it would appear that somehow the description of one of the photos has been appended to many of them. I don't know if maybe I accidentally screwed up a batch change at some point in the past, or if the library upgrade somehow messed up, or what else may have happened. But what I need to do is fix these somehow. Manually editing is something of a daunting task: of a library of 21,248 photos, 18,858 have this additional text. The one thing I do have going for me is that it's a specific string. If there's a way to "remove this string from everywhere in the library without removing the rest of any given description", that would be perfect. Is there anything I can do like this? Maybe even manually editing a library file in a text editor? (Would that break anything else in iPhoto if its library were edited outside of the application, even while it's not running?) Does anybody have any ideas?

    Read the article

  • What LPR arguments do I need to print a 1400x800 pixel image on a 4x6 label?

    - by Nick
    This is driving me nuts. UPS sends our system a 1400x800 GIF image of a shipping label, which is supposed to fit nicely on a 4x6 page. Unfortunately, I can't seem to get the command-line options right to make it happen. We're using an Eltron/Zebra 2844 with a network adapter, and printing from our Ubuntu 8.04 server using CUPS. We're using the correct drivers, and test pages print correctly. No matter what I try, though, it insists on printing the UPS labels across 6 pages, with a little bit of the label on each page, or way too small. I've tried a bazillion different lpr settings, most of them producing garbage. The closest I've gotten is this:

        lpr -P Eltron2844 -o natural-scaling=55 -o page-right=0 -o page-left=0 -o landscape -o media="4x6" ./1ZY437560399620027.gif

    but it causes the image to be too small on the page. It's about an inch too short, and there's a 1/2" margin on both sides. If I bump the scale up to 56, it explodes the image onto two pages and squashes it. Any ideas?
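    A couple of other combinations may be worth trying -- sketches only, since whether these options are honored depends on the 2844's PPD:

        # let CUPS scale the GIF to the page instead of using natural-scaling
        lpr -P Eltron2844 -o media=Custom.4x6in -o fit-to-page -o landscape ./1ZY437560399620027.gif
        # or give the page size explicitly in points (4in = 288pt, 6in = 432pt)
        lpr -P Eltron2844 -o PageSize=w288h432 -o fit-to-page ./1ZY437560399620027.gif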

    Read the article

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails 3.2.15 right now). As it has been getting busier, the log files it produces are becoming less and less useful for troubleshooting. When requests come in like this, it's not a problem:

        Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000
        Processing by KittenController#rendered_png as HTML
          Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"}
          Rendered text template (0.0ms)
        Sent data (1.8ms)
        Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms)

    Short request, quickly assembled, and all the relevant log lines are in one block. However, not all of our code renders in 1037ms. A few calls can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log lines belong to which GET:

        Sent data (4.1ms)
        Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms)
        Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

    Ooookaaaay... which goes with what? Is it possible to add something like a transaction ID to these log lines? The log-spam would be interspersed, but at least grep-magic would give me the unified entries that I need.

    Read the article

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I plan to run daily via a cron job to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I'm uncertain how to log the output of each command -- errors, warnings, or any other pertinent information each command may produce. I would also like to install some type of fail-safe so that if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running standard LAMP.

        #!/bin/sh
        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
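    A minimal sketch of the logging and fail-safe part, to bolt onto the top of the script above (the log path and the notification address are placeholders, and mailx has to be configured for the mail line to go anywhere):

        #!/bin/bash
        LOG=/root/backups/logs/backup_$(date +%m%d%y%H%M).log
        exec >>"$LOG" 2>&1        # stdout and stderr of every following command land in the log
        set -e                    # the first failing command stops the script dead
        trap 'echo "backup failed on $(hostname), see $LOG" | mail -s "backup FAILED" admin@example.com' ERR

        # ... the tar / gzip / scp / rm steps from the script above go here, unchanged ...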

    Read the article

  • Intel Motherboard Lightning Victim Dies Hard

    - by Stetson RDT
    Today I have a more hardware-related question. I have an Intel board, and I really do not know which board it is; I built the machine for a relative, but he forgot to keep the documentation. Long story short, the computer was disconnected during a lightning storm, but a lightning strike travelled in via the ethernet cable (it was directly connected to a power brick of the kind commonly seen on those long-distance ISP wireless transmitters), and the motherboard was shocked. I am attempting to get this PC going again. The problem is as follows: the computer will randomly reboot, right in the middle of anything, as it pleases. It may load to EFI (or whatever the BIOS is nowadays), may load to the bootloader, may even get to the OS, but before 5 minutes are up the system will always die. Out of curiosity, I plugged my voltmeter into a Molex connector. On the 5V side it reads a good, consistent +5.13V. On the 12V side it fluctuates, as follows: upon startup it soars to 12.11-12.13V, and it will then do one of two things: either immediately drop to 12.04-12.05V, or hover for about a minute at 12.11-12.13V and then drop. It seems the longer the voltage stays at 12.11-12.13V, the shorter the machine will stay running. Also, the POST codes, whenever the machine locks up but does not die hard, seem to be between "AA" and "AC". Does this make any sense to anybody? Do you all think this motherboard is salvageable? It was an expensive bugger, and I'd prefer not to replace it.

    Read the article

  • Modifying value of "Rating" column within Explorer for arbitrary file types

    - by Fake Name
    Basically, I have a large body of assorted media (text, images, flash files, archives, folders, etc...) and I'm attempting to organize it. Windows Explorer has a Rating column, but there seems to be no way to modify the rating of a file short of opening it in its type-specific software (e.g. Media Player or Photo Viewer). However, this does not work when the file is of an unsupported type (.rar, .swf, ...) or is a directory. I'd be more than willing to consider a file-manager replacement (I've already looked at quite a few: Directory Opus, Total Commander, etc...), or even a solution that stores the rating metadata in a hidden file in each folder, or in a separate database. The one really critical requirement is the ability to sort by rating while being filetype-agnostic. Basically, is there any way to categorize a large collection of assorted files by rating that will work with any file type, including directories? Ideally, there would be an easy way to add arbitrary columns to Windows Explorer and edit them directly; however, there seems to be no way to do this, and the Rating column is the next best thing.

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, the PID says it is using 470mb of RAM while the 'used' memory immediately drops by over a 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak with my process. I don't know what to believe because I would expect a memory leak to simply waste the memory the process is allocated not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used' it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory leaking java process in this case? If I use free before: total used free shared buffers cached Mem: 2097152 149264 1947888 0 0 0 -/+ buffers/cache: 149264 1947888 Swap: 0 0 0 free after: total used free shared buffers cached Mem: 2097152 1094116 1003036 0 0 0 -/+ buffers/cache: 1094116 1003036 Swap: 0 0 0 So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top is only using 452mb, does that mean that the kernal is all of a sudden using an additional 500mb?

    Read the article

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even on TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it nothing short of invaluable for securing data. As relatively well-known free, open-source software, I would have thought that TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine): "TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536–131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data." Whilst the hidden headers supposedly "cannot be identified", is it not possible, on encountering a volume encrypted using TrueCrypt, to determine at which offset the header was successfully decrypted, and from that determine whether you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right -- or am I reading it wrong?

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by user37755
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine, forgot to unplug the other, and the p/s and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the dead machine), but it's a quad-core Phenom II with a 1kW p/s. This is outside my experience, so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Enterprise workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI. Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise environment on a very short notice and delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then went on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.)

    So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn the Microsoft side anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons, that I've not found yet, why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background. Thoughts? Or weblinks? Thanks!

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and I'm getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what it means to me. I'm making what you might compare to an email app: users can send messages to one another. I just don't understand which of the several options would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput as a number of reads and writes per second. Sure, I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost-efficient solution, and I want to know what a throughput of 50 reads/writes per second compares to. Is that a high number? What is a good throughput number for a message-sending app with, say, 50,000 daily users? I'm just using specific numbers so I can understand what these throughput figures represent. 100 transactions/second seems like a small number to me, but since I'm not familiar with this stuff I'm just looking to put everything in context. What would 100 reads/writes per second be useful for? Are there any typical example values available? And I'm not sure what each service is good for: for a message-sending app, is there any reason I'd want to choose, say, Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
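    For a rough sense of scale, a back-of-the-envelope sketch (the 20-messages-per-user figure is purely an assumption):

        50,000 daily users x 20 messages each = 1,000,000 writes/day
        1,000,000 writes / 86,400 seconds     = ~12 writes/second on average
        so 50 provisioned writes/second would cover roughly a 4x peak over that average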

    Read the article
