Search Results

Search found 25727 results on 1030 pages for 'solution'.

Page 800 of 1030

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel, and I would like to merge these fifos. The naïve solution is:

        cat fifo* > output

    But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is:

        (cat fifo1 & cat fifo2 & ... ) > output

    But this may mix the output, giving half-lines in the output. When reading from multiple fifos, there must be some rules for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does:

        parallel_non_blocking_cat fifo* > output

    which will read from all fifos in parallel and merge the output a full line at a time. I can see it is not hard to write that program. All you need to do is:

    1. open all fifos
    2. do a blocking select on all of them
    3. read nonblocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record) then print out the line
    5. if all fifos are closed/eof: exit
    6. goto 2

    So my question is not: can it be done? My question is: Is it done already and can I just install a tool that does this?
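    To make the idea concrete, here is a rough sketch of that loop in Python (line-oriented records only, no error handling):

        import os
        import select
        import sys

        def parallel_line_cat(paths):
            # Note: opening O_NONBLOCK means a fifo with no writer yet reads as EOF;
            # a real tool would have to handle that (e.g. reopen or wait for writers).
            fds = {os.open(p, os.O_RDONLY | os.O_NONBLOCK): b"" for p in paths}
            while fds:
                ready, _, _ = select.select(list(fds), [], [])     # step 2: blocking select
                for fd in ready:
                    chunk = os.read(fd, 65536)                     # step 3: non-blocking read
                    if not chunk:                                  # EOF on this fifo
                        if fds[fd]:
                            sys.stdout.buffer.write(fds[fd] + b"\n")   # flush a partial line
                        os.close(fd)
                        del fds[fd]
                        continue
                    parts = (fds[fd] + chunk).split(b"\n")
                    for line in parts[:-1]:                        # step 4: full lines only
                        sys.stdout.buffer.write(line + b"\n")
                    fds[fd] = parts[-1]                            # keep the incomplete tail
                sys.stdout.buffer.flush()

        if __name__ == "__main__":
            parallel_line_cat(sys.argv[1:])    # parallel_line_cat.py fifo* > output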

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features:

    • Private wiki. Only I will read or write it.
    • Some formatting capability: headings, bold, italic, bullets, block quotes.
    • Wiki viewer for Windows 7. If it comes with an editor, I need to be able to hide it.
    • Page editor for Windows 7.
    • Page editor for iPhone.
    • Synchronization via the cloud, but available offline in Windows.

    So far, my research has led me to the Markdown language. I can easily edit this as plain text using Notepad++ for Windows and Elements for iPhone. I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from using HTML due to its verbose formatting codes. Can anyone recommend a solution for me? If need be, I'd be happy to make a small one-off payment for software.
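    If nothing off the shelf turns up, I suppose a bare-bones read-only viewer is only a few lines of scripting. This is just a sketch, assuming the third-party Python markdown package is installed; it converts one page to HTML and opens it in the default browser:

        import pathlib
        import sys
        import tempfile
        import webbrowser

        import markdown  # third-party package: pip install markdown

        def view(md_path):
            # Render one Markdown page (headings, bold, italic, bullets, quotes)
            # to a temporary HTML file and open it read-only in the browser.
            text = pathlib.Path(md_path).read_text(encoding="utf-8")
            html = markdown.markdown(text)
            out = pathlib.Path(tempfile.gettempdir()) / (pathlib.Path(md_path).stem + ".html")
            out.write_text(html, encoding="utf-8")
            webbrowser.open(out.as_uri())

        if __name__ == "__main__":
            view(sys.argv[1])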

    Read the article

  • How do I set up postfix to store e-mail in a file instead of relaying it?

    - by GomoX
    I want to run a staging copy of a production server on a local environment. The system runs a PHP application, which sends e-mail to customers in various scenarios, and I want to make sure no e-mail is ever sent from the staging environment. I can tweak the code so it uses a dummy e-mail sender, but I'd like to run the exact same code as the production environment. I can use a different MTA (Postfix is just what we use in production), but I'd like something that is easy to set up under Debian/Ubuntu :) So, I'd like to set up the local Postfix install to store all e-mail in (one or more) files instead of relaying it. Actually, I don't really care how it's stored as long as it's feasible to check the e-mail that was sent. Even a setup option that tells Postfix to keep the e-mail in the mail queue would work (I can purge the queue when I reload the staging server with a copy from production). I know this is possible, I just haven't found any good solution online for what seems like a fairly common need. Thanks!
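    For reference, if a different MTA really is acceptable, the smallest thing I can think of is a throwaway SMTP sink. This is only a sketch using Python's standard smtpd module (which was removed from the standard library in Python 3.12, so it assumes an older interpreter); it accepts everything on port 2525 and appends it to a file instead of relaying:

        import asyncore
        import smtpd

        class FileSink(smtpd.SMTPServer):
            # Accept every message and append it to a file; never relay anything.
            def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
                with open("/var/tmp/staging-mail.mbox", "ab") as f:
                    header = "From %s to %s\n" % (mailfrom, ", ".join(rcpttos))
                    f.write(header.encode())
                    f.write(data if isinstance(data, bytes) else data.encode())
                    f.write(b"\n\n")

        if __name__ == "__main__":
            # Point the PHP app's SMTP settings (or a Postfix relayhost) at 127.0.0.1:2525.
            FileSink(("127.0.0.1", 2525), None)
            asyncore.loop()

    For a quick look without any code, running python -m smtpd -n -c DebuggingServer localhost:1025 dumps every received message to stdout on interpreters that still ship smtpd.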

    Read the article

  • Reverse web proxy with time constraints

    - by user2893458
    I have a web application which produces several unique URLs of the type http://service.company.com/service.html?type=aaaa&key=jfiZm6u6cW where the last part is a randomly generated key. Each such URL provides access to an instance of the service provided. I am looking for a way to restrict access to those URLs based on time constraints. For example, URL #1 should be available between 8:00 AM and 10:00 AM on May 30, URL #2 should be available between 10:30 AM and 12:00 PM on May 31, and so on. I already have a resource scheduling application based on Drupal and would like to find a way to include those URLs as scheduled resources. The web application is deployed on Apache Tomcat, and I don't have the knowledge or the resources to alter it, so I thought that I could put some sort of reverse proxy in front of the web app that could implement the time constraint feature. In my thoughts the reverse proxy would allow or disallow access to each URL based on the rules that my scheduling application would provide. There may be other ways to deliver such a solution, but I can't think of anything better, so the question is: is there a reverse web proxy architecture that could allow access to the destination URLs based on time and date rules? Any other ideas are more than welcome.
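    To sketch the logic I have in mind (only a sketch, assuming the schedule can be exported from Drupal into a simple key-to-time-window table; the backend address and the dates are placeholders):

        from datetime import datetime
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs
        from urllib.request import urlopen

        BACKEND = "http://tomcat-backend:8080"     # placeholder backend address
        SCHEDULE = {                               # key -> (start, end), exported from the scheduler
            "jfiZm6u6cW": (datetime(2013, 5, 30, 8, 0), datetime(2013, 5, 30, 10, 0)),
        }

        class Gate(BaseHTTPRequestHandler):
            def do_GET(self):
                # Look up the `key` query parameter; forward only inside its window.
                key = parse_qs(urlparse(self.path).query).get("key", [""])[0]
                window = SCHEDULE.get(key)
                now = datetime.now()
                if not window or not (window[0] <= now <= window[1]):
                    self.send_error(403, "This URL is not available at this time")
                    return
                with urlopen(BACKEND + self.path) as upstream:   # naive forward to Tomcat
                    body = upstream.read()
                self.send_response(200)
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8000), Gate).serve_forever()

    The same gatekeeping could equally be done in nginx (auth_request or a Lua hook) or HAProxy with an ACL lookup, which is probably more robust than a hand-rolled proxy.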

    Read the article

  • Fake demonstration software for the command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to rewrite various commands - wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands:

        echo "hello world!"
        ls
        ls -l
        ls -l | sort

    I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$ echo "hell, and if I typed any other letters the terminal would fill up with the remainder of the command until I press enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is - does similar software exist for giving demonstrations? Or even, is this an easy thing to script up...? EDIT - two quick things: first of all I'm on OS X - but it would be nice to get a general solution for other people who arrive here from Google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually I'd like it to at least look like I'm typing live - that's why I put in the bit about the one-to-one keymapping, but I don't think I explained that quite as well as I could have...
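    To sketch the behaviour I'm after, a rough Python prototype (POSIX terminals only, so it should work on OS X and Linux; the playback file has one command per line):

        import subprocess
        import sys
        import termios
        import tty

        def getch():
            # Read a single keypress without waiting for Enter.
            fd = sys.stdin.fileno()
            old = termios.tcgetattr(fd)
            try:
                tty.setraw(fd)
                return sys.stdin.read(1)
            finally:
                termios.tcsetattr(fd, termios.TCSADRAIN, old)

        def demo(script_path):
            commands = [line.rstrip("\n") for line in open(script_path) if line.strip()]
            for cmd in commands:
                sys.stdout.write("demo$ ")
                sys.stdout.flush()
                shown = 0
                while True:
                    ch = getch()
                    if ch == "\x03":                      # Ctrl-C: bail out
                        sys.exit(1)
                    if ch in ("\r", "\n"):                # Enter: show the rest, then run it
                        sys.stdout.write(cmd[shown:] + "\n")
                        sys.stdout.flush()
                        break
                    if shown < len(cmd):                  # any other key echoes the next
                        sys.stdout.write(cmd[shown])      # character of the scripted command
                        sys.stdout.flush()
                        shown += 1
                subprocess.call(cmd, shell=True)

        if __name__ == "__main__":
            demo(sys.argv[1])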

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances, rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    • a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    • all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

    • best possible IO performance for the DB server; no network delay in IO
    • decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
    • good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?

    Read the article

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so once every month this doesn't happen because of reasons out of our control. This heavily impacts our business since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ) but obviously restarts happen, both automatically and manually, and people forget things or systems mess up. What I would like to track is events that should occur or things that should be available. Like the existence of a process, the execution of a program, or the creation/age of a file, and track it when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
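    The checks themselves are easy enough to script; the sketch below is the kind of thing I've been writing so far (the process name and data file path are placeholders), and the real question is what should run it and raise the alerts:

        import os
        import subprocess
        import sys
        import time

        MAX_DATA_AGE_HOURS = 26                   # nightly push, with some slack
        PROCESS = "delayed_job"                   # placeholder background worker name
        DATA_FILE = "/data/import/latest.csv"     # placeholder nightly upload path

        def process_running(name):
            # pgrep -f exits 0 if at least one matching process exists
            return subprocess.call(["pgrep", "-f", name],
                                   stdout=subprocess.DEVNULL) == 0

        def file_fresh(path, max_age_hours):
            if not os.path.exists(path):
                return False
            age_hours = (time.time() - os.path.getmtime(path)) / 3600
            return age_hours <= max_age_hours

        problems = []
        if not process_running(PROCESS):
            problems.append("%s is not running" % PROCESS)
        if not file_fresh(DATA_FILE, MAX_DATA_AGE_HOURS):
            problems.append("%s is missing or older than %dh" % (DATA_FILE, MAX_DATA_AGE_HOURS))

        if problems:
            # Exit non-zero and say why, so cron/Munin/Nagios-style wrappers can alert.
            print("; ".join(problems))
            sys.exit(2)
        sys.exit(0)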

    Read the article

  • How do I view source in Outlook 2010?

    - by Martin Duys
    This question has already been asked here, but the answer only gives advice on how to view the email header, not the actual HTML source of the email. There is another question here which I think may be caused by the same issue as mine, but it does not have a satisfactory answer (the answer does not work for the person who posted the question). If I right-click in the body of an email I can see the option to 'View Source', but when I select it nothing happens. I have done a bit of research and came across a post for a much earlier version of Outlook that suggested adding something to the registry. I applied this advice, but it made no difference; however, I'm pretty sure that applying this solution correctly for my circumstances will do the trick. When I first received this machine it had a demo version of UltraEdit installed. I uninstalled UltraEdit and installed Notepad++ instead. I am convinced that there is a registry entry that is pointing to UltraEdit as the default viewer for 'View Source' in my email, and I need to replace this entry with a reference to Notepad or Notepad++, but I don't know how to do this. Any suggestions?

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app. Users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with number of reads and writes per second. Sure I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost efficient solution, and I want to know what a throughput of 50 reads/writes/second compares to. Is that a high number? What is a good throughput number for a message sending app with say 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second to me seems like a small number since I'm not familiar with this stuff, so I'm just looking to bring everything in context. What would 100 read/write/second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message sending app, is there any reason I'd want to choose say Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
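    To give myself a sense of scale, here is the back-of-the-envelope calculation I've been trying to do; the per-user message counts below are pure guesses, not measurements:

        # Back-of-the-envelope only; the message volumes are assumptions.
        daily_users = 50000
        messages_per_user_per_day = 20                               # assumed
        writes_per_day = daily_users * messages_per_user_per_day     # 1,000,000
        average_writes_per_sec = writes_per_day / 86400              # ~11.6
        peak_factor = 5                                              # traffic is never evenly spread
        peak_writes_per_sec = average_writes_per_sec * peak_factor   # ~58
        print(average_writes_per_sec, peak_writes_per_sec)

    Under those guesses, an average of roughly a dozen writes per second (a few times that at peak) is the ballpark for 50,000 daily users, and as I understand it one provisioned DynamoDB write unit covers one write per second of an item up to 1 KB (a read unit covers a strongly consistent read of up to 4 KB).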

    Read the article

  • How to prevent an SSD from disappearing from BIOS

    - by Midimatt
    I've only recently upgraded my old machine to a new one with a brand new 60 GB SSD as my boot drive and a 1 TB main drive. Paranoid about completely breaking my SSD, I read up on a lot of issues that I needed to watch out for, including making sure AHCI was turned on and TRIM enabled. The PC has been working fine for a few weeks now, until today. My wife was watching some TV on the machine when it started to act strange and eventually blue screened. She rebooted and the boot manager was missing. When I got home from work I checked the BIOS and the drive had disappeared. I panicked and looked up some possible fixes, and I discovered a large number of people having problems with the drive firmware, especially on OCZ Vertex and Agility drives, and my drive is an Agility 3 drive. The problems included blue screens followed by missing drives, and a solution was to reset the CMOS and try again. This worked, and now everything seems to be working fine. My question is, is there any way to prevent this from happening? Am I missing a setting for my SSD? All of the posts I found were from early to mid-2011; nothing for the end of 2011 to 2012. So I am wondering if I've missed anything. EDIT: Checked my drive's firmware and it is 2.15, which has had issues reported by users.

    Read the article

  • Elevating UAC via .bat file?

    - by jslaker
    Pretty straightforward one that I'm having trouble finding an answer to. Serverfault previously helped me with finding a way to automate Windows updates without using WSUS. It's working fantastically, but to run it over the network, you have to first mount a shared drive. That's pretty simple on XP since you just mount the drive and run the updater. On Vista and W7, though, this all has to be done with elevated privileges to work correctly. The UAC account can't see network drives mounted by the regular user, so in order to get everything working, I have to mount the share via net use from an escalated shell. I'd like to automate mounting this share and launching the updater via a simple .bat file. I could probably just instruct everybody to right-click "Run as Administrator" on the .bat file, but I'd like to keep things as simple as possible and have the .bat automatically prompt the user to escalate their privileges. Since these computers don't belong to us, I can't count on anything like PowerShell being installed, so that rules out any solution along those lines, and I pretty much have to rely on things that would be included in an RTM Vista install. I'm hoping I'm mostly missing something obvious here. :)

    Read the article

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700. So, _www can't get in, even if abc is chmod 777. I recognize the following options:

    1. Allow _www access to my Documents folder.
    2. Put the files I want to share outside of my Documents folder.
    3. Hard-link the files outside of my Documents folder, and point apache to the hard links.

    None of these solutions are acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes as my user account so it can get in there? Or, is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!

    Read the article

  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for all and everything. At work we even have as many entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is defeating its purpose here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting up. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way to install privileges, but members of the %production group must be able to consult configuration files or even log files. There is still the solution to add them into the right groups (mysql...) and set the right permissions. But I do not want to usermod all users, and I do not want to modify standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look like this:

        setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permissions system? Thank you

    Read the article

  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect. (Cache the request to some site which returns a redirect, and cache the second request which returns the actual content.) So far my config looks like this:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    It works for the first request, or without the 30x codes for proxy_cache_valid in the /proxy part, but $target and $upstream_http_location are empty if the response was cached. Is there a nice solution to cache both requests? Thanks!

    Read the article

  • New AD user request form and workflow

    - by user66390
    I'm wondering if anyone is providing a solid solution for creating New Network User Account Request forms, and attaching workflows to them to automate account creation? I'm currently investigating a number of options, but am surprised that such a ubiquitous task hasn't been solved a dozen times over and thoroughly documented. Or at least isn't integrated into current off-the-shelf change management and ticketing systems. Ideally, I'd like for our current ticketing system, ServiceDesk+ to present a standard 'New User' form to department heads, which they can fill in with the required new user details. This triggers a workflow that submits the request as a ticket that can be reviewed and actioned. Actioning the ticket triggers a workflow that creates a user in AD with the details provided, and notifies the department head upon completion. All told, a pretty standard requirement that I'm sure most organizations have. What are other people doing to accomplish this? Edit: I should add, I'm more looking for "supported" methods. As is, I've submitted a number of scripted solutions, none of which have met with manager approval.

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder. I then serve these movies using DLNA. But in the initial folder where they end up, all kinds of files end up too: pictures, music, documents etc. I thought I'd fix this by running the following command inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and copies all the found movies of the given extension types to the Movies folder. But I wonder if there is any way to tell rsync that if a folder is found which includes a movie of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. I have a solution figured out via a script made in PHP (yes, I actually do most of my scripting on Linux using PHP... just a habit that stuck a long time ago). But if rsync can handle it from the start, that would be super. Also, I have noticed that this rsync invocation actually copies all the root folders in the given folder. If no movie is in a folder it will still create an empty folder. How do I prevent rsync from doing this, and save me the trouble of deleting all the folders in Movies that are empty?
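    The wrapper-script approach I mentioned boils down to roughly the following logic (sketched here in Python rather than PHP; the source and destination paths are placeholders): find every directory that contains at least one movie and hand the whole directory to rsync, which also avoids the empty-folder problem.

        import os
        import subprocess

        SRC = "/srv/incoming"        # placeholder for the drop folder
        DST = "/srv/Movies"          # placeholder for the DLNA folder
        MOVIE_EXT = (".avi", ".mkv")

        def movie_dirs(root):
            # Yield every directory under root that directly contains a movie file.
            for dirpath, _dirnames, filenames in os.walk(root):
                if any(f.lower().endswith(MOVIE_EXT) for f in filenames):
                    yield dirpath

        for d in sorted(set(movie_dirs(SRC))):
            rel = os.path.relpath(d, SRC)
            target = DST if rel == "." else os.path.join(DST, rel)
            os.makedirs(target, exist_ok=True)
            # -rvt as in the original command; the whole directory goes, .srt files included
            subprocess.check_call(["rsync", "-rvt", d + "/", target + "/"])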

    Read the article

  • Workaround for Dell "Power Supply Not Recognised" issue

    - by Haedrian
    So, I have a Dell Inspiron and the power supply port appears to be damaged. Basically, when I plug it in I get a nice popup telling me that it couldn't detect that it's a Dell power supply, so it won't charge the battery and it underclocks the system. It still works for other purposes (that is, giving power). I thought it was the actual power supply cable, so I bought a new one; that worked for a while, provided I inserted it at JUST THE RIGHT angle. But now that's not working anymore, so I assume it's the part which connects to the computer. The battery charging I can live without; the underclocking I can't. I'd like a way around this issue. Things I've tried:

    • Updating the BIOS
    • Replacing the power supply cable
    • Inserting it at different angles
    • Turning it off and on again
    • Swearing at it
    • Twisting it while inserting it

    So, is there a workaround somehow? I'd like to avoid taking out my soldering kit and risking permanently damaging expensive equipment, if that's all right. I'm hoping for a software solution. Added: The exact model is a Dell Inspiron N5010

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically have the infrastructure already in place to support these services at a number of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients will connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

    Read the article

  • VPN: Disable class based route addition for Windows XP/Vista

    - by brgsousa
    Paraphrasing this SuperUser link: When you set up a VPN, the Windows default is to enable "Use default gateway on remote network." A new default route is added to the routing table pointing to the remote network's gateway, and the existing default route has its metric increased to force all Internet traffic to traverse the tunnel and use the remote network's gateway. All traffic uses the VPN, and traffic destined for the outside world is directed to the remote gateway. When the VPN drops, the route to the remote gateway is removed and the original default route is set back to the original metric. Unchecking "use default gateway on remote network" means that new default route isn't added, so Internet traffic goes out the local gateway, but a new classful route is added to the routing table, using the local adapter's IP, pointing down the VPN. Only traffic destined for the classful network of the local adapter goes down the VPN. This may not be what you want. Checking "Disable class based route addition" means that classful route isn't added to your machine when the VPN starts up, and you'll need to add the appropriate routes for networks that should be routed through the tunnel. But the option "Disable class based route addition" is available ONLY for Windows 7. How can I do something like that for Windows XP or Vista, since they don't have that option? I have searched around for that but have found no solution yet.

    Read the article

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to only run if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated but others haven't, with the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which, when run, will tell the load balancer to remove the box from the pool. Then puppet can do the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box: one is a staging copy, and when puppet updates that, a notify action triggers the removal script, and the files are then copied from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).

    Read the article

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small - medium size website on and more or less learn the ins and outs of running my own web server before I bite the bullet and spend a couple grand on a server. The specs are:

    • Dual (2) Intel Xeon 2.4GHz, 400MHz bus, 512KB cache
    • 4GB PC2100 ECC Registered memory
    • 6 x 72GB 10K U320 SCSI hard drives
    • Smart Array 5i RAID controller
    • Redundant power supplies
    • DVD/floppy, dual Intel GB NICs, USB

    Or would I be better off spending a couple hundred bucks on something like this new HP? Seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so would the hardware be likely to support Linux with a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small - medium size website.

    Read the article

  • Create taskbar shortcut to website in Windows 7

    - by BJ292
    I'd like to create a shortcut to a website in Windows 7 on the taskbar that is not pinned to the default web browser. Currently if I drag the favicon from the left end of the Firefox address bar to the Win 7 taskbar it will pin a shortcut to the Firefox browser icon. Similarly if I create a shortcut on the desktop to a website and drag it to the taskbar it will also end up pinned to the Firefox icon. The problem with this is that to get to that shortcut I have to right-click on the Firefox icon and then select the pinned shortcut. That is workable for me, but I want to do this for a child - so the shortcut needs to be right there on the taskbar as a stand-alone item. There is a workaround that pretty much solves the problem:

    1. Create a new folder somewhere safe.
    2. Create the shortcut to the website in the new folder.
    3. Right-click the taskbar, select Toolbars - New toolbar, then browse to the folder you created and select it as the new toolbar.
    4. The contents of the folder will now appear on the taskbar as shortcuts. You need to drag it from the right-hand end of the taskbar into the middle, turn off "show titles" and "show text", and make the icon large.

    I'd call this a 75% solution. Anyone know how to make a web shortcut that looks and operates just like any of the other shortcuts on the taskbar?

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which is serving either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

    1. Throw varnish in front of nginx and let varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    2. Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

    Read the article

  • Windows 7: resizing the 'Save As' window

    - by Mark Miller
    I do not know whether my question is appropriate for this forum. Apologies if it is not. I am running Windows 7 Professional with Service Pack 1 on a Dell Vostro 460 PC. I am downloading journal articles from the internet and saving them as *.pdf files. Somehow I unintentionally clicked a button that resulted in the 'Save as' window filling the entire screen of my computer, except for the toolbar at the very bottom. How can I resize the 'Save as' window so it only fills perhaps somewhere around 25% of the computer screen, or whatever the default size for that window is? I have searched the internet extensively and found one or two threads about this problem, but no specific solutions were posted there. One suggestion was to grab the bottom of the window with the mouse and scroll upward until the window was the desired size. That does not appear to be possible in my case. Another suggestion was to click on the window with the middle mouse button before attempting to resize the window, but that does not appear to help in my case either. Thank you for any help. If I should post this question in a different forum here please let me know, or kindly migrate the question to the appropriate forum. If additional information is necessary before a solution can be attempted, please let me know that as well.

    Read the article

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    The three partitions for my system are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap. The filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want these people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But still I haven't found any solution to wipe this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so my partition was completely full of data. df said 100% of /dev/mapper/home was used and there were 0 sectors available. But I could still recover gigs of data with photorec, although I selected to recover just from the free space. photorec displays: /dev/mapper/home - 340 GB / 317 GiB (RO), but df displays that the size of /home is just 313G. Why are there these differences, and what does the 340 GB refer to? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but I can access it to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?

    Read the article
