Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • How long will a "safely stored" Solid-State-Drive (SSD) keep its data? (e.g. bank safety-deposit box)

    - by user31575
    Here's my use case: copy photos/videos once and only once to an internal SATA solid-state drive (SSD), then put that drive in a well-ventilated, air-conditioned bank safety-deposit box for safekeeping. The question: how long can I safely store an SSD in such an environment, i.e. with 0% bit rot and 100% success when it is plugged back in? Are some SSDs more reliable than others for this use case (e.g. smaller vs. larger capacity, SLC vs. MLC, different brands, etc.)? More fodder: I have read that solid-state memory cards (e.g. CompactFlash or SD cards) have much longer durability than other media (DVDs, CDs, hard drives) for this use case (guaranteed against bit rot and other failure on the order of a decade versus a year). I don't know if this applies to SSDs. Copying to one 500 GB SSD is easier than copying to eight 64 GB flash drives. SATA SSDs have no moving parts, but they have more visible electronics than a CompactFlash card, and I don't know whether those electronics can fail in ways a CompactFlash card's wouldn't. I know many will point to Carbonite and other cloud backup services, but I like the simplicity of having physical copies and want to understand the risks and implications. Thanks.

    Read the article

  • Can't see more than the first few lines in an SSH connection

    - by hello
    I need some help with SSH scrollback/buffer size. I have a Vista machine at home with "Free SSHD" installed on it, plus Dynamic DNS set up so I can access some of my home lab equipment that is connected to this Vista machine. From my work machine, which runs XP, I connect to my home machine using PuTTY. Everything up to this point works without any problem. The issue is that I can't see more than the first few lines of any output. When I press the space bar to get more output onto the screen, the earlier output scrolls up and is lost as more output is displayed. The PuTTY client on my work machine is configured with a large enough scrollback buffer, but the output still only shows a few lines, and as it scrolls up the buffer empties automatically. I have searched the web and haven't found a proper solution anywhere. Can someone please help? Thanks.

    Read the article

  • logfile deleted on Oracle database how to re-create it?

    - by Daniel
    For my database assignment we were looking into "database corruption", and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started the database with startup pfile=$PFILE nomount and mounted it with alter database mount; but when I try alter database open; it gives me the error: ORA-03113: end-of-file on communication channel (Process ID: 22125, Session ID: 25, Serial number: 1). I am assuming this is because the second redo log file is missing; log01a.rdo is still there, but not the one I deleted. How can I recover from this so that I can open my database again? I looked at the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I run select group#, member from v$logfile; I get:
      1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
      2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
      3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
      4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo
    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M; nothing changes: the statement supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this log file and open my database again?
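
    One avenue worth checking (a minimal sketch, not taken from the question above; it assumes the instance is shut down, can reach the MOUNT state, and that group 2 is not the current/active group) is to let Oracle re-create the lost member in place with ALTER DATABASE CLEAR LOGFILE:

      #!/bin/bash
      # Hypothetical sketch: re-create a deleted redo log group while the DB is mounted.
      # Assumes ORACLE_SID/ORACLE_HOME are already set in the environment.
      sqlplus -s / as sysdba <<'SQL'
      -- Skip STARTUP MOUNT if the instance is already mounted.
      STARTUP MOUNT;
      -- Rebuilds the member files for group 2 on disk, even though they were removed.
      ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
      ALTER DATABASE OPEN;
      SQL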

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block by the actual client IP address (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP that we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need to use nginx to block the request based on the value of $http_x_forwarded_for. Initially, we tried multiple, simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.
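
    For what it's worth, one commonly suggested alternative to chained if blocks (a rough sketch, not from the post above; the include path and the addresses are placeholders) is a single geo map keyed on $http_x_forwarded_for, combined with one if per server block:

      #!/bin/bash
      # Hypothetical sketch: generate a geo-based blocklist include for nginx.
      # /etc/nginx/conf.d/blocklist.conf and the example addresses are made up.
      cat > /etc/nginx/conf.d/blocklist.conf <<'EOF'
      # Map the forwarded client IP to a flag; one lookup instead of many if blocks.
      geo $http_x_forwarded_for $blocked_client {
          default        0;
          192.0.2.10     1;
          198.51.100.25  1;
      }
      EOF
      # Each server block would then need a single check along the lines of:
      #   if ($blocked_client) { return 403; }
      nginx -t && nginx -s reload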

    Read the article

  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and I'm looking for some form of server setup to do this. For various reasons we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers, so I thought I would tap into the collective wisdom of Server Fault! My broad requirements are:
      - Budget $30-50k, with an aim to get as much compute capacity as possible for that budget
      - 64-bit servers suitable to run Ubuntu Linux + Java
      - A relatively standalone rack that can be installed in secure office space
      - Fast, low-latency network connections between the servers; connectivity to the outside world doesn't really matter
      - Storage capacity shared between the servers - they don't necessarily need their own storage provided they can be booted from a common image
      - Downtime can be tolerated (since the calculations are run in batch mode)
      - The software itself is fault-tolerant, so there is no need for extra resiliency in the server setup (cheap, replaceable commodity parts will be fine in general)
    Given these requirements, what kind of setup would you recommend and why?

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:
      Warning: The file properties have changed:
      [15:52:25] File: /bin/ps
      [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
      [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
      [15:52:25] Current inode: 142902    Stored inode: 130894
      [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)
      Warning: The file properties have changed:
      [15:52:33] File: /usr/bin/ldd
      [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
      [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
      [15:52:33] Current inode: 2236210    Stored inode: 2234359
      [15:52:33] Current size: 5280    Stored size: 5279
      [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
      [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)
      Warning: The file properties have changed:
      [15:52:37] File: /usr/bin/pgrep
      [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
      [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
      [15:52:37] Current inode: 2229646    Stored inode: 2224867
      [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)
      Warning: The file properties have changed:
      [15:52:41] File: /usr/bin/top
      [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
      [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
      [15:52:41] Current inode: 2229629    Stored inode: 2224862
      [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)
      Warning: The file properties have changed:
      [15:52:53] File: /usr/sbin/cron
      [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
      [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
      [15:52:53] Current inode: 2224719    Stored inode: 2228839
      [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)
    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives.
    Edit: Is there something else like rkhunter that I can run for a second scan?

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here, I've asked why, ["it just didn't work"] and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/ which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
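
    A rough sketch of one way to keep the downloaded packages around (not from the post above; it assumes the stock yum on RHEL 6 and that /mnt/cifs_share is already mounted and writable, and the yum-cache subdirectory name is made up) is to point yum's cache at the share and turn on keepcache so the RPMs are not purged after each transaction:

      #!/bin/bash
      # Hypothetical sketch: share one yum download cache across RHEL 6 machines.
      # keepcache=1 stops yum from deleting packages once a transaction finishes;
      # cachedir relocates the cache onto the shared mount.
      sed -i 's/^keepcache=0/keepcache=1/' /etc/yum.conf
      grep -q '^cachedir=' /etc/yum.conf || \
          echo 'cachedir=/mnt/cifs_share/yum-cache/$basearch/$releasever' >> /etc/yum.conf
      # Each host then installs only the updates that apply to it, reusing cached RPMs.
      yum update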

    Read the article

  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the PostgreSQL database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message: "ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it." My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from being able to restart successfully. I've also made sure that ssl was off in the postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance!
    [Edit 1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example:
      db small VM: mydatabase.cloudapp.net (Affinity Group US East)
      forums medium VM: myforums.cloudapp.net (Affinity Group US East)
    On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
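
    One quick way to see whether name resolution or network latency between the two cloud services is the bottleneck (a minimal sketch, not from the post; the hostname is the one mentioned above, while the role, database names and the internal IP are placeholders) is to time a trivial query from the Rails VM:

      #!/bin/bash
      # Hypothetical sketch: measure connection plus query latency from the Rails VM.
      for i in 1 2 3 4 5; do
          time psql -h mydatabase.cloudapp.net -p 5432 -U myapp -d myapp_production -c 'SELECT 1;' >/dev/null
      done
      # Comparing against the DB VM's internal IP (placeholder below) isolates DNS cost:
      #   time psql -h 10.0.0.4 -U myapp -d myapp_production -c 'SELECT 1;'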

    Read the article

  • Storing bundled AMI:s at Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. After a lot of hair pulling, I managed to get a server with Ubuntu up and running with memcached and some other goodies that make a great package for me. I thought, however, that when storing it as an AMI with this tool, I would have memcached available the next time I launched an instance based on that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow make a command run automatically on server creation, like initiating memcached with "memcached -d -m 1700 -u root", or even a batch of them?
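
    One way to have a command run on every boot of instances launched from the image (a rough sketch, not from the post; it assumes a stock Ubuntu /etc/rc.local that ends with "exit 0" and that memcached is already installed in the AMI) is to append it to rc.local before bundling:

      #!/bin/bash
      # Hypothetical sketch: make memcached start on every boot of the bundled AMI.
      # Insert the command just before the final "exit 0" line of /etc/rc.local.
      sed -i '/^exit 0$/i memcached -d -m 1700 -u root' /etc/rc.local
      chmod +x /etc/rc.local
      # Verify the line landed where expected before re-bundling the image.
      tail -n 3 /etc/rc.local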

    Read the article

  • NETSH: Set default IP address for an interface with multiple IPs

    - by elarichi.y
    To test a load balancer I need to switch my IP address several times a day, while keeping the other IPs routing through other WANs. I run these commands in a batch script:
      netsh interface ip set address "Connexion au réseau local" static %ipd% 255.255.255.0 192.168.1.1 1
      netsh in ip add address "Connexion au réseau local" %ips1% 255.255.255.0
      netsh in ip add address "Connexion au réseau local" %ips2% 255.255.255.0
    ipd is the default IP I want to set (all traffic should go through it). ips1 and ips2 are the secondary IPs I want to keep. But whatever I do, all traffic goes through one IP (the first one in the range). Please help me with this issue.

    Read the article

  • Windows 2008 startup script will not run?

    - by larsks
    I am trying to get a very simple batch script to run when my Windows 2008 Server (R2) system starts up. I have added the script to the "Startup Scripts" in the local group policy by running gpedit.msc, and I see the script listed under Windows Settings/Scripts (Startup/Shutdown)/Startup when I run rsop.msc, but the script is not being executed. The "Last Executed" column in rsop is empty even after a reboot, and a file that should be created by the script is never created. At the moment, the entire contents of the script are:
      rem Check if this script is running.
      date /t > c:\temp\flag
    The target directory (c:\temp) exists. The script is called c:\scripts\startup.bat, and works fine if I run it by hand.

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping to troubleshoot. In SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB - some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It is my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size itself: although it is mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell whether their log was full or empty, and there are no replies to it. So I am guessing it is not a problem - does anyone know for sure?

    Read the article

  • Prevent member of administrator group logging in via Remote Desktop

    - by Chris J
    In order to support some build processes on our Server 2003 development servers, we require a common user account that has administrative privs. Unfortunately, this also means that anyone who knows the password can gain admin privs on a server. Assume that trying to keep the password secret is a failed exercise. Developers who need admin privs already have admin privs, so they should be able to log in as themselves. So the question is a simple one: is there anything I can configure to prevent people (ab)using the account to gain administrator on servers they shouldn't have administrator on? I'm aware that devs could disable anything that is put in place, but that's then down to process and auditing to track and manage. I don't mind where or how: it can be via the local security policy, group policy, a batch file executed in the user's profile, or something else.

    Read the article

  • Blocking of certain file downloads

    - by Philip Fourie
    I have a problem where I cannot completely download a certain file from a server. The file is 1.9 MB in size, but only 68% is downloaded and then it hangs. The following cases all failed:
      - Downloaded the file with HTTP
      - Downloaded the file with FTP
      - Moved the file to different FTP and web servers behind the ISA firewall
      - Tried with IIS 6.0 & IIS 7.0
      - Multiple download clients, including Firefox, FileZilla (on Windows) and wget (on Linux)
    This worked: downloading other files from the same location on the server, both bigger and smaller in size than the original, over both FTP and HTTP. An earlier version of this file (.DLL) works. It is as if the content of this file has an influence on how it is served. Network architecture: Client Machine - Internet (ISP) - ISA Server - IIS 7.0. The only constants are the ISP, the Cisco router and the ISA server. Is it possible that something is rejecting the download because of the contents of the file? I am hoping ISA is the culprit... I am not an ISA expert; is there somewhere I can look to establish whether it is indeed ISA causing this?
    Update: Splitting the file into two parts with a hex editor results in one half of the file being served correctly and the other half not. Zipping the file results in the file being downloaded successfully; however, this is not an option for this particular scenario. Renaming the file and its extension also doesn't work.
    Update 2009/10/22: It does NOT seem to be ISA that is causing this problem. We connected a laptop (running IIS) on an available public IP and still the file downloaded to 68% before it hanged. The two remaining components are the ISP and the Cisco 800 series router. Does anyone know about an issue on the router perhaps?

    Read the article

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed Backtrack 5R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output:
      *-network
         description: Ethernet interface
         product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
         vendor: Realtek Semiconductor Co., Ltd.
         physical id: 0
         bus info: pci@0000:02:00.0
         logical name: eth0
         version: 05
         serial: 04:7d:7b:b7:46:f8
         size: 100MB/s
         capacity: 100MB/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
         resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff
    lspci output:
      00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
      00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
      00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
      00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
      00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
      00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
      00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
      00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
      00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
      00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
      00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
      00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
      02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
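
    Notably, no wireless controller appears in the lspci listing at all. A quick diagnostic pass (a minimal sketch of generic checks, not taken from the post) would be to confirm whether the adapter is visible on any bus and whether it is soft- or hard-blocked:

      #!/bin/bash
      # Hypothetical sketch: basic checks for a wireless adapter that does not appear.
      # Look for the card on the PCI and USB buses.
      lspci -nn | grep -iE 'network|wireless|wlan'
      lsusb | grep -iE 'wireless|wlan|802\.11'
      # See whether any wireless interface exists and whether rfkill is blocking it.
      iwconfig 2>/dev/null | grep -i 'ieee 802.11'
      rfkill list all
      # Kernel messages often name the missing firmware or driver.
      dmesg | grep -iE 'firmware|wlan|wifi' | tail -n 20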

    Read the article

  • Convert old videos to have smaller sizes

    - by Tim
    I have some videos from a few years ago in various formats, such as avi, mpg, wmv, rm, rmvb, .... Their sizes are huge (more than 500 MB, and sometimes 1 GB). Given that there has likely been some advance in data compression since then, I would like to know which file formats and compression methods are recommended these days for achieving a big reduction in size without obvious loss of quality. How can I perform the file format conversion and data compression in Ubuntu 12.04? Command-line and batch ways would be the most convenient, although GUI ways are also appreciated. Thanks and regards!
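
    As one illustration (a rough sketch, not from the post; on 12.04 the binary may be avconv rather than ffmpeg, older builds may need "-strict experimental" for the aac encoder, and the quality/preset values are just starting points), a batch re-encode to H.264/MP4 might look like:

      #!/bin/bash
      # Hypothetical sketch: batch-convert old videos to H.264 in an MP4 container.
      # -crf 23 trades size against quality (lower = better quality, bigger file).
      mkdir -p converted
      for f in *.avi *.mpg *.wmv; do
          [ -e "$f" ] || continue   # skip patterns that matched nothing
          ffmpeg -i "$f" -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k \
                 "converted/${f%.*}.mp4"
      done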

    Read the article

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an anaconda kickstart, and everything seems to be running except for the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This brings me to a prompt where I can retry DHCP, and it seems to always work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry, or better yet, not fail the first time? Thanks. Kickstart:
      # System language
      lang en_US
      # Language modules to install
      langsupport en_US
      # System keyboard
      keyboard us
      # System mouse
      mouse
      # System timezone
      timezone America/New_York
      # Root password
      rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/
      # Initial user
      user --disabled
      # Reboot after installation
      reboot
      # Use text mode install
      text
      # Install OS instead of upgrade
      install
      # Use Web installation
      url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/
      # System bootloader configuration
      bootloader --location=mbr
      # Clear the Master Boot Record
      zerombr yes
      # Partition clearing information
      clearpart --all --initlabel
      # Disk partitioning information
      part swap --size 512
      part / --fstype ext3 --size 1 --grow
      # System authorization infomation
      auth --useshadow --enablemd5
      %include /tmp/pre_install_ubuntu_network_config
      # Always install the server kernel.
      preseed --owner d-i base-installer/kernel/override-image string linux-server
      # Install the Ubuntu Server seed.
      preseed --owner tasksel tasksel/force-tasks string server
      # Firewall configuration
      firewall --disabled
      # Do not configure the X Window System
      skipx
      %pre
      wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null
      # Network information
      # Start pre_install_network_config generated code
      # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses
      # Start eth0
      # Configuring eth0 (00:1A:64:36:B1:C8)
      if ip -o link show | grep -i 00:1A:64:36:B1:C8
      then
          IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :)
          echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config
      fi
      # End pre_install_network_config generated code
      %packages
      openssh-server

    Read the article

  • Remote Desktop Services Licensing - Does server have to have a RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS' website that in order to set up multiple users, we needed to install the RDS role. I did so, but now the application we are trying to share runs much slower than it did before. Prior to the role installation it took about 5 seconds to open; now it takes a few minutes to open, without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently changing to another instance size is not an option (it may become possible later if we receive enough funding). This leads me to the following questions: 1) Is it a sensible assessment to assume that the RDS role is what is slowing things down, or are there other things I could look at to speed it up? We are talking about a machine with ~600MB of memory. 2) If I revert back to the pre-RDS role, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable, but someone else may have some other experience. I am also making it clear that we want to do this in a legitimate way. Thanks in advance for any assistance that can be provided! EDIT: If it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one: I want to know whether it is possible to install valid licenses without the RDS role.

    Read the article

  • Second CPU of dual-core processor missing

    - by Zardoz
    My Lenovo T61 has a dual-core CPU. I just noticed that under Ubuntu 10.10 only one CPU is recognized. I know that both CPUs worked at some point; I'm not sure since when the second CPU has been missing - maybe since the last kernel update. Currently I am using linux-image-2.6.35-23-generic (for x86_64). What can I do to enable the second CPU again? Here is the output of /proc/cpuinfo:
      processor       : 0
      vendor_id       : GenuineIntel
      cpu family      : 6
      model           : 23
      model name      : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
      stepping        : 6
      cpu MHz         : 800.000
      cache size      : 3072 KB
      physical id     : 0
      siblings        : 1
      core id         : 0
      cpu cores       : 1
      apicid          : 0
      initial apicid  : 0
      fpu             : yes
      fpu_exception   : yes
      cpuid level     : 10
      wp              : yes
      flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority
      bogomips        : 4189.99
      clflush size    : 64
      cache_alignment : 64
      address sizes   : 36 bits physical, 48 bits virtual
      power management:
    Any help is welcome. I really need that CPU power for my work here.
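
    A few generic checks (a minimal sketch, not taken from the post) can help narrow down whether the kernel was told to ignore the second core or simply failed to bring it online:

      #!/bin/bash
      # Hypothetical sketch: check why only one core is visible.
      nproc                                    # how many CPUs the system reports
      cat /proc/cmdline                        # look for nosmp, maxcpus=1 or similar boot options
      dmesg | grep -iE 'smp|cpu1|apic' | head -n 20    # kernel messages about bringing up CPUs
      ls /sys/devices/system/cpu/ | grep '^cpu[0-9]'   # cores the kernel knows about
      # If cpu1 exists but is offline, it can sometimes be re-enabled at runtime:
      #   echo 1 | sudo tee /sys/devices/system/cpu/cpu1/online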

    Read the article

  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full-size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:
      #!/bin/bash
      window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
      wmctrl -v -i -r $window -e '0,0,0,6030,5828'
      wmctrl -i -a $window
      import -window $window ~/Desktop/screenshot.png
    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions. It uses ImageMagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?

    Read the article

  • script to test mail server

    - by WebDude
    Ever since a Windows update took down my IIS 6 mail server a few weeks back, I've been really paranoid about my mail server working. So every time I run Windows Update, I fire up a command prompt and send myself a quick test mail, like so:
      > telnet localhost 25
      > helo domain.com
      > mail from: [email protected]
      > rcpt to: [email protected]
      > data
      some random body to mail myself
      .
    This is a really great way to test my mail server, but it's a pain in the neck to do quickly. Is there any way I can run this in a batch script or something as a quick test? I've tried a .bat file, but it just waits after I call telnet. I've also explored whether telnet accepts any input files, and it does not seem to. What's the best way to do this?
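
    For what it's worth, on a box with a Unix-like shell available (Cygwin or any Linux host that can reach the server - an assumption, since the poster is on Windows; the addresses and hostname are the redacted placeholders from above), a minimal scripted SMTP check can pipe the dialogue into netcat instead of telnet:

      #!/bin/bash
      # Hypothetical sketch: scripted SMTP smoke test against a mail server.
      HOST=localhost
      PORT=25
      {
          sleep 1; echo "HELO domain.com"
          sleep 1; echo "MAIL FROM:<[email protected]>"
          sleep 1; echo "RCPT TO:<[email protected]>"
          sleep 1; echo "DATA"
          sleep 1; echo "Subject: SMTP test"; echo; echo "some random body to mail myself"; echo "."
          sleep 1; echo "QUIT"
      } | nc "$HOST" "$PORT"
      # A "250 ..." response after each command indicates the server accepted it.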

    Read the article

  • Automatically copy files out of directory

    - by wizard
    I had a user's laptop stolen recently during shipping, and it was set up with Windows Live Sync. The thief or the buyer's kids took some photos of themselves, and these were synced to the user's My Documents. I had just finished moving the user's files out of the synced My Documents folder when I noticed this. Later they took some more photos and a video. I wrote a batch script to copy files out of the synced directory every 5 minutes into a dated directory. In the end I ended up with a lot of copies of the same few files. Ignoring what Windows Live Sync offers (at the time there was no way to undelete files - I've since moved on to Dropbox, so this isn't really an issue for me), what's the best way to preserve changes and files from a directory? I'm interested in Windows solutions, but if you know of a good way on a *nix please go ahead and share.
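
    Since *nix answers are invited, here is a minimal sketch (the paths are made up) using rsync's --backup option, so every changed or deleted file is preserved in a per-run dated directory instead of piling up full copies of everything:

      #!/bin/bash
      # Hypothetical sketch: mirror a watched directory and keep every superseded file.
      SRC="$HOME/synced_docs/"
      DEST="$HOME/mirror"
      STAMP=$(date +%Y-%m-%d_%H%M%S)
      # --backup moves files that rsync would overwrite or delete into --backup-dir,
      # so each run only stores what actually changed since the previous run.
      rsync -a --delete --backup --backup-dir="$DEST/changed-$STAMP" "$SRC" "$DEST/current/"

    Run from cron every few minutes, the current/ copy stays up to date while the changed-* directories preserve the history.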

    Read the article

  • I need a script to lock down the system time setting for users via gpedit.msc

    - by Chester
    I need to lock down the system time setting on a number of PCs via gpedit.msc, removing Administrators from the right and then adding 'Administrator' and 'polling'. Can I do this via a script? Manually, I have to:
      1. Run gpedit.msc
      2. Go to Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment
      3. Double-click "Change the system time"
      4. Select Administrators and click Remove
      5. Click Add User or Group, type Administrator, click Check Names
      6. Type polling, click Check Names
      7. Click OK, Apply, OK, then log off
    I have to do this for a huge number of computers, so is there a batch file I could run on each PC to do this? Your help would be very much appreciated. Best regards.

    Read the article

  • Advice on resizing 1280*720 for web audiences.

    - by jamiethompson90
    Forgive my spelling; I'm posting this from my mobile. I've recently decided to record videos to help teach a visual language. My camera likes to boast that it can record at 1280; it's a cheap camera (about £75), so the quality isn't amazing, but it's okay. Anyway, it has some other settings for lower resolutions, but I figure I might as well record at the larger size in case the need arises for a bigger source file in the future. I've been looking at JW Player to play the converted files (mp4 to flv, I think). What size do you think would be good to convert to? I want it to look nice and clear, remembering that it is a visual language, so lip patterns, facial expressions, body movement, fingers etc. are all important. Sound is not that important, but I would like to have the choice to toggle captions. Thanks for any help; any advice appreciated. This is the first time I have done a video project! P.S. If anyone's interested, it's BSL (British Sign Language).
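
    As a starting point (a rough sketch, not from the post; the filenames and the 640x360 target are assumptions), scaling a 1280x720 source down to a 16:9 web-friendly size while keeping detail in faces and hands might look like:

      #!/bin/bash
      # Hypothetical sketch: downscale a 1280x720 recording for web playback.
      # 640x360 keeps the 16:9 aspect ratio; try 854x480 if hands/lips look soft.
      ffmpeg -i lesson_720p.mp4 -vf scale=640:360 \
             -c:v libx264 -crf 22 -preset slow \
             -c:a aac -b:a 96k lesson_web.mp4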

    Read the article

  • On the Windows command line, how do I get a dynamic prompt that tells me where in the filesystem I am?

    - by guneysus
    I am trying to modify my CMD prompt to show only the current directory name, dynamically, like:
      Desktop $
    When I switch folders, it must be updated. It does not have to be pure batch; it may depend on external commands, Cygwin bash, etc. My attempt:
      @echo off
      set a=bash -c "pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'"
      %a%
    Running this in cmd outputs:
      _test-et
      Microsoft Windows [Version 6.3.9600]
      (c) 2013 Microsoft Corporation. Tüm haklari saklidir.
      >>
    But
      >> prompt %a%
    just gives a prompt that literally reads bash -c "pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'"
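
    For what it's worth, if the session can simply live inside Cygwin bash rather than cmd, bash already does this natively (a minimal sketch; \W is the standard bash prompt escape for the basename of the current directory):

      # Minimal sketch: a prompt showing only the current directory name.
      # \W expands to the basename of $PWD; add this line to ~/.bashrc.
      PS1='\W \$ '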

    Read the article
