Search Results

Search found 41511 results on 1661 pages for 'via point'.


  • Script help: FIND command via atime, output to multiple files

    - by sswagner
    Here is a script I wrote that I need help with. In the script I do a find for any file that has not been accessed for over 30, 60, 90, 180, 270 or 365 days. This works just fine; however, it takes a few days just to finish the 30-day portion, since it is scanning a NAS (millions and millions of files). As you can see, the 30-day output already holds all the data needed for the rest of the script; the 60-day, 90-day, and later portions just redo the same effort as the 30-day portion over a longer time frame. It would save weeks of re-scanning if the 60-, 90-, 180-day (and so on) portions could somehow get their data from the 30-day output, and this is where I am asking for help. The output is just like an ls -l listing, and as you can see below, it spans multiple years. The script is printed below.

        total 24
        -rw-r--r-- 1 root bin 60 Apr 12 13:07 config_file
        -rw-r--r-- 1 root bin 9 Apr 12 13:07 config_file.InProgress
        -rw-r--r-- 1 root bin 0 Apr 12 13:07 config_file.sids
        -rw-r--r-- 1 root bin 1284 Apr 19 10:41 rpt_file
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/dat1_01.gif
        -rw-r--r-- 1 16074 5003 20088 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/set1_04.gif
        -rw-r--r-- 1 16074 5003 2008 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/get2_03.htm
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/per1_01.gif

    Any help is appreciated. These are Linux distro boxes, so I am sure Perl is on there too if needed. Thanks!

        #!/bin/ksh
        # Search shares for files that have not been accessed for a certain time.
        # NOTE: $IN = input search, $OUT = output directory for the text files.
        # TESTS: numeric arguments can be specified as +n for greater than n,
        #        -n for less than n, n for exactly n.
        # -atime n: file was last accessed n*24 hours ago.
        IN1=/nas/quota/slot_2/CR*
        IN2=/nas/quota/slot_3/CR*
        IN3=/nas/quota/slot_4/CR*
        IN4=/nas/quota/slot_5/CR*
        OUT=/nas/quota/slot_3/CR_PRJ144/steve
        mkdir ${OUT}
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +30  -exec ls -l '{}' \; >> ${OUT}/30days.txt;  done
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +60  -exec ls -l '{}' \; >> ${OUT}/60days.txt;  done
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +90  -exec ls -l '{}' \; >> ${OUT}/90days.txt;  done
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
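
    One way to avoid the repeated scans (an editor's sketch, assuming GNU find with -printf; the file name atime_master.txt is illustrative): scan once, record each file's access time as an epoch timestamp, and derive every bucket from that single listing with awk:

        #!/bin/ksh
        OUT=/nas/quota/slot_3/CR_PRJ144/steve
        NOW=$(date +%s)
        # single pass: epoch atime first, then the usual ls -l style details
        for dir in /nas/quota/slot_2/CR* /nas/quota/slot_3/CR* /nas/quota/slot_4/CR* /nas/quota/slot_5/CR*; do
            find "$dir" -atime +30 -printf '%A@ %M %u %g %s %p\n'
        done > ${OUT}/atime_master.txt
        # derive each bucket from the master list; no re-scan needed
        for days in 30 60 90 180 270 365; do
            awk -v now="$NOW" -v days="$days" 'now - $1 > days * 86400' \
                ${OUT}/atime_master.txt > ${OUT}/${days}days.txt
        done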

    Read the article

  • Recovering drive via boot to Win7 setup command prompt

    - by Valamas
    I am trying to recover data from two old IDE drives. Drive 1 was successful, but something is wrong with Drive 2: it does not appear as a drive letter. Due to limited legacy hardware, the only way I can see these drives is to boot from Windows 7 setup media and go to the command prompt. Without going further into why, my question is how I can access the data from this command prompt. I discovered the DISKPART command and, while I am a first-time user, it looked like something that could fix my problem. Here are the results of my diskpart commands; at the bottom is an image of the commands, taken with a camera. Drive 2 is present, because I can see it when using diskpart. How can I copy the information using a robocopy script if the drive letter is not available? How can I assign a drive letter? Is there any repair command I need to execute? When I execute DISKPART, the following is what I see:

        DISKPART> LIST DISK
        Disk ###  Status  Size   Free
        Disk 5    Online  37 GB  2048 KB

    So then I select disk 5:

        DISKPART> SELECT DISK 5
        "Disk 5 is now the selected disk."

    When I list partitions:

        DISKPART> LIST PARTITION
        Partition ###  Type     Size
        Partition 1    Primary  101 MB
        Partition 2    Primary  37 GB

    So I select partition 2:

        "Partition 2 is now the selected partition."

    I then try to assign a drive letter:

        DISKPART> ASSIGN LETTER=G
        "There is no volume specified."
        "Please select a volume and try again."

    When I list volumes, the drive is not present:

        DISKPART> LIST VOLUME

    [Image: result of the above commands]
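
    A possible next step (an untested sketch; the diskpart commands are standard, but whether a volume ever appears depends on the filesystem being intact): force a rescan and list volumes again before reaching for recovery tools. If a volume does show up for the 37 GB partition, a letter can be assigned and robocopy can run against it:

        DISKPART> RESCAN
        DISKPART> LIST VOLUME
        DISKPART> SELECT VOLUME 2
        DISKPART> ASSIGN LETTER=G
        DISKPART> EXIT
        X:\> robocopy G:\ D:\recovered /E /R:1 /W:1

    If LIST VOLUME still shows nothing for that partition, the filesystem itself is likely damaged, and no letter can be assigned until the partition is repaired or imaged.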

    Read the article

  • Programmatically assigning an existing ssl cert to a website in iis6 via powershell or vbscript

    - by dagda1
    Hi, I have the following PowerShell script that creates a new website in IIS6: https://github.com/dagda1/iis6/blob/master/create-site.ps1 Does anyone know how I can assign an existing SSL cert to the website? I know I can set the port number using adsutil.vbs like this: cscript adsutil.vbs set w3svc/xxx/securebindings ":443:somewhere.com" But I am drawing a big blank when it comes to assigning an existing SSL certificate. Thanks, Paul
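
    A hedged pointer rather than a tested answer: in the IIS6 metabase the certificate binding lives in the per-site SSLCertHash and SSLStoreName properties, and the IIS 6.0 Resource Kit ships IIsCertDeploy.vbs, which can bind an existing cert from a .pfx to a site. The flags below are from memory, so check the Resource Kit documentation:

        cscript IIsCertDeploy.vbs -c c:\certs\somewhere.pfx -p pfxPassword -i w3svc/xxx

    Setting SSLCertHash directly through adsutil.vbs tends to be unreliable because it is a binary metabase property, which is presumably why the securebindings trick works while the cert assignment does not.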

    Read the article

  • One user always unsuccessful in logging into the MFP via DSS

    - by jherlitz
    We use HP M4345 MFPs here with the HP DSS software. All users can log into the MFPs except for one particular user, and I haven't been able to find out why. We are leaning towards her Active Directory account being corrupted; however, I hate to delete and recreate the account, as that would cause a lot of extra work. Looking for any advice before we have to proceed down that road.

    Read the article

  • Update OCZ Vertex LE capacity via firmware update

    - by Ben Voigt
    I have an OCZ Vertex LE 100 GB drive. It's actually 128 GiB of NAND flash, with a whopping 28%+ reserved for write combining; most 128 GiB drives are ~115 GB usable (and marketed as 120 GB or 128 GB). There were rumors that the reserved fraction could be decreased on OCZ 100 GB drives. Can anyone provide a link to firmware that does that, or an official statement that no such firmware exists? (NB: I recently installed the 1.24 firmware from the OCZ site; it didn't affect the capacity, possibly because the rumors say the capacity change is destructive to existing content.) Of possible interest: flashing the firmware was more of a pain than it should have been. The tool didn't detect the disk until I booted an older Windows install off a secondary hard disk; I suspect the Intel SATA driver is the issue and the tool only works with the msahci.sys driver.

    Read the article

  • Sharing iTunes Library Between Mac & PC Via A NAS

    - by Franco
    Hi everyone, I'd be really grateful if anyone can help me with this. I spent literally days trawling the net before I came across this site, which seems to have very knowledgeable people! The problem: for years I've had a couple of PCs and a NAS drive. I've been storing all my music on the NAS drive and accessing the library on whichever PC I wanted by pointing both PCs at the iTunes files on the NAS drive. The good thing is I can see all my playlists, song ratings, etc. Now I've just bought a MacBook Pro as well, and I want to be able to access the same music, song ratings and so on from this machine. I've tried simply holding down Option and navigating to the .itl files that my Windows machine created, but that doesn't work. Is there some way to use the same iTunes library (apart from Home Sharing) on both machines? Thank you so much for reading this.

    Read the article

  • Installing Linux on PowerEdge R410 via USB

    - by Bill Johnson
    I'm hoping someone can help me with the following issue. I have a Dell PowerEdge R410 whose optical drive had already failed when I was given the server. I have installed 2 SATA drives and want to install Ubuntu 11.04; however, every attempt so far (i.e. a bootable .iso on USB) has failed. I assume it fails because, as with a lot of releases, the installer looks for the CD drive: Ubuntu fails during installation with the error message "unable to mount CD". I tried installing Microsoft Hyper-V and that also fails, asking for CD/DVD drivers during installation. I tried embedding drivers into ISOs from various distros (Linux and Windows) and that hasn't worked out either. Does anyone have any idea how I can get Ubuntu onto this server? Should I look at an older distro perhaps?
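
    For the USB stick itself, a minimal sketch from another Linux box (assuming the stick is /dev/sdX; verify with lsblk first, since dd overwrites the target). Whether an 11.04 image boots this way depends on it being a hybrid ISO; if it is not, usb-creator or UNetbootin is the safer route:

        # write the installer image to the whole stick, not a partition
        sudo dd if=ubuntu-11.04-server-amd64.iso of=/dev/sdX bs=4M
        sudo sync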

    Read the article

  • SSH Tunnel for Remote Desktop via Intermediary Server

    - by Mihai Todor
    I've seen many examples of SSH tunnels on the net, but I'm still having no luck with this. Here's the setup: a Windows 7 PC in a private network, sitting behind a firewall, with the PowerShellInsider SSH server set up and working fine; a public-access Linux server, which has access to the PC; and a Windows 7 laptop, at home, from which I wish to use Remote Desktop on the PC. Here's what I've tried so far. SSH tunnel from my laptop to the Linux server:

        ssh -f my_user@LINUX_SERVER -L 6666:LINUX_SERVER_IP:6666 -N

    SSH to the Linux server, where I've set up a tunnel to the PC:

        ssh -f 'PRIVATE_DOMAIN\my_user'@PC_NAME -L 6666:PC_IP:3389 -N

    Unfortunately, I must be doing something wrong, because it doesn't seem to work. Any ideas why, or at least any suggestions on how I can debug this setup? At the moment I have access to all 3 machines (non-root on Linux), so I can test whatever I want.
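
    One hedged guess at the fault: the tunnel on the Linux server listens only on its loopback interface, while the laptop's tunnel forwards to the server's public IP (LINUX_SERVER_IP), so the two never meet. A consistent pair might look like this (names as in the question; local port 13389 avoids colliding with the laptop's own RDP listener):

        # on the Linux server: its local port 6666 -> RDP on the PC
        ssh -f -N 'PRIVATE_DOMAIN\my_user'@PC_NAME -L 6666:PC_IP:3389

        # on the laptop: local port 13389 -> the server's loopback port 6666
        ssh -f -N my_user@LINUX_SERVER -L 13389:localhost:6666

    Then point the Remote Desktop client on the laptop at localhost:13389.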

    Read the article

  • Error when trying to access shared files from iMac via SMB

    - by SatheeshJM
    I used to access all my Windows XP shared files on my Mac using Finder > Window > Connect to Server. Now, all of a sudden, an error crops up when I try to connect: "There was a problem connecting to the server "192.168.1.*". The server may not exist or it is unavailable at this time. Check the server name or IP address, check your internet connection and then try again." How can I get rid of this error and access my shared files from my Mac? P.S. My network connection is fine.
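
    A quick way to take Finder out of the equation (a sketch; the share and user names are placeholders for whatever the XP box exports):

        mkdir -p /Volumes/xpshare
        mount -t smbfs //username@192.168.1.5/SharedDocs /Volumes/xpshare

    If this mounts, the SMB layer is fine and the problem is Finder-side; if it fails too, look at the XP machine (firewall, NetBIOS/SMB services) rather than the Mac.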

    Read the article

  • rm command and regular expressions via Linux BASH shell

    - by PeanutsMonkey
    I am attempting to use regular expressions to remove a set of files; however, the bash shell returns these messages:

        rm: cannot remove `[0-99]+ -': No such file or directory
        rm: cannot remove `[a-zA-Z': No such file or directory
        rm: cannot remove `]+.[a-z]+': No such file or directory

    The pattern given to rm is [0-99]+\ - [a-zA-Z ]+\.[a-z]+. Questions: can I use regular expressions? If yes, how do I use them with commands such as rm, mkdir, etc.?
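
    For context (an editor's sketch; GNU find assumed): rm takes no patterns at all. The shell expands glob patterns before rm ever runs, and globs are not regexes ([0-99]+ as a glob means one character from the set 0-9 followed by a literal +). To select files by regex, let find do the matching; note that find's -regex is tested against the whole path, hence the leading \./ below:

        # preview what would match
        find . -maxdepth 1 -regextype posix-extended -regex '\./[0-9]+ - [a-zA-Z ]+\.[a-z]+'
        # then delete
        find . -maxdepth 1 -regextype posix-extended -regex '\./[0-9]+ - [a-zA-Z ]+\.[a-z]+' -delete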

    Read the article

  • approx via inetd is not open to connections from other machines

    - by Cédric Girard
    I have an approx server to speed up Debian apt updates on my Ubuntu 11.04 desktop PC. It ran fine in the past, but today port 9999 is open from localhost and not from other PCs, and I have not modified the inetd configuration at all. What can I check and try?

    inetd.conf:

        9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx

    approx.conf:

        # Here are some examples of remote repository mappings.
        # See http://www.debian.org/mirror/list for mirror sites.
        debian    http://ftp2.fr.debian.org/debian
        security  http://security.debian.org/debian-security
        volatile  http://volatile.debian.org/debian-volatile
        # The following are the default parameter values, so there is
        # no need to uncomment them unless you want a different value.
        # See approx.conf(5) for details.
        $cache /espace/Dossiers/approx
        $max_rate unlimited
        $max_redirects 5
        $user approx
        $group approx
        $syslog daemon
        $pdiffs true
        $offline false
        $max_wait 10
        $verbose false
        $debug false

    I tried allowing other PCs to connect with "ALL: ALL" in hosts.allow; ufw is disabled and iptables-save output is empty.
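
    A first diagnostic (a sketch): check which address the listener is actually bound to, then probe from both sides:

        # on the approx host: 0.0.0.0:9999 means all interfaces, 127.0.0.1:9999 means loopback only
        sudo netstat -tlnp | grep :9999
        # from another PC:
        telnet approx-host 9999

    If the socket is bound to loopback only, the inetd side needs fixing; if it is bound to all interfaces but remote connections still die, check hosts.deny and any other TCP-wrappers configuration, since an inetd-launched approx may be subject to them.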

    Read the article

  • Trusted Sites via GPO: <*.> gets left off

    - by HannesFostie
    As stated in the question title: one of our end users has to use a web application that requires her to add the website to Trusted Sites. By default this is disabled, but after my colleague added the sites to the GPO that pushes these, in the form *.domain.com, it shows up as domain.com in her Trusted Sites list. Has anyone encountered or, better yet, fixed this issue?
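
    This may be display-only rather than a real loss (a hedged note; the registry paths are from memory, so verify on a client): the Site to Zone Assignment List lands in the ZoneMap registry keys, where *.domain.com is stored as a key named domain.com containing a value named * whose data is the zone number (2 = Trusted Sites). IE's UI then tends to show the entry without the wildcard even though subdomains still match. Worth checking whether the value is actually there:

        reg query "HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\domain.com"
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\domain.com"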

    Read the article

  • Ext4 Input/Output Error Reboot via SSH

    - by LorenVS
    I've got a remote appliance whose disk I/O seems to have locked up; trying to run anything that isn't already loaded results in errors like this:

        $ sudo shutdown -r 0
        sudo: Can't open /var/lib/sudo/<machine_name>/0: Read-only file system
        sudo: unable to execute /sbin/shutdown: Input/output error

    I have SSH access to the appliance. I'm hoping that restarting the box will fix this (if not, I have to go replace it), but trying to restart yields the output above. Anyone have any ideas?
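
    One trick that sometimes works when the disk is gone but the kernel is still alive (a sketch; this reboots immediately with no sync or clean unmount, so it is a last resort): echo is a shell builtin and /proc needs no disk I/O. It does require an already-open root shell, since sudo itself is failing here:

        echo 1 > /proc/sys/kernel/sysrq    # enable magic SysRq if it is not already
        echo b > /proc/sysrq-trigger       # immediate reboot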

    Read the article

  • Instant MSI deployment via GPO

    - by HannesFostie
    Is it possible to instantly deploy a piece of software by creating a GPO in Active Directory? I realize it's possible, but only after rebooting the computer, and that is something we don't always want to do, especially since some of the software is going onto servers. What are the options? One thing to note: most of our users do NOT have admin rights, and I am talking not only about servers but also about workstations in the classrooms.
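
    GPO software installation only applies at startup (or logon, for per-user packages), so for a no-reboot push the usual workaround is to run the installer remotely under admin credentials, for example with PsExec from the Sysinternals suite (a sketch; host and share names are placeholders):

        psexec \\targethost -s msiexec /i \\fileserver\deploy\app.msi /qn /norestart

    Since it runs as SYSTEM on the target, the logged-on user's lack of admin rights doesn't matter.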

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing and a Squid cluster for caching, with the application data on an IIS cluster. We balance on URI in HAProxy to optimize the Squid hit rate, but Squid holds a different copy of each page for each distinct Accept-Encoding header the browsers send, so IE (gzip, deflate) gets a different cached copy than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding header, and I think the best place to do so would be in HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking clients that support neither gzip nor deflate.
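
    A sketch of one way to do this in the HAProxy frontend, using the 1.4-era req* directives (verify against your version's documentation):

        # any client that can take gzip gets exactly one canonical header
        acl accepts_gzip hdr_sub(Accept-Encoding) gzip
        reqirep ^Accept-Encoding:.* Accept-Encoding:\ gzip if accepts_gzip
        # clients without gzip get no header at all, so Squid keeps one identity copy
        reqidel ^Accept-Encoding: unless accepts_gzip

    This collapses the cache to two variants per URL: gzip and identity. Clients that advertise only deflate lose deflate under this scheme; handling them separately would need a second acl and rule.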

    Read the article

  • How to retrieve virtual machines from a pool via API in oVirt (RHEV)

    - by FerCa
    In oVirt (Red Hat Enterprise Virtualization) you can create a virtual machine pool to let your users retrieve virtual machines from that pool. I found how a user can request a virtual machine from the pool in the RHEV User Portal; this is explained here: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html/Evaluation_Guide/Evaluation_Guide-Allocate_VM.html The thing is that I need to retrieve virtual machines from the pool with the REST API, and after reading the documentation (https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html-single/REST_API_Guide/index.html) I can't find a way to do this.
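
    A workaround sketch, in case it helps: the 3.0 REST API may not expose a direct "allocate from pool" action, but pooled VMs are ordinary VM resources, so one can search for the pool's VMs and start one that is down, which is roughly what the user portal's Take VM does. The /api/vms collection and the start action are documented; the pool search expression and host names here are assumptions:

        # list VMs belonging to the pool
        curl -u 'user@domain:password' -H 'Accept: application/xml' \
            'https://rhevm.example.com:8443/api/vms?search=pool%3Dmypool'

        # start one of them (allocates it to the authenticated user)
        curl -u 'user@domain:password' -H 'Content-Type: application/xml' \
            -d '<action/>' \
            'https://rhevm.example.com:8443/api/vms/{vm_id}/start'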

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients. I need to control access to each repository individually: not just push access, but clone as well. I've got an .htaccess that requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic
        <Limit GET POST PUT>
        Require valid-user
        </Limit>
        <FilesMatch "\.(htaccess|passwd|config|bak)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

    Then in each repository I've got a .hg/hgrc requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

        http://mercurial.selenic.com/wiki/HgWebDirStepByStep
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    and others, but I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction if it matters, set up roughly like this: http://docs.webfaction.com/software/mercurial.html
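
    For what it's worth, Mercurial's [web] section does have the read-side counterpart being wished for here: allow_read restricts browsing and cloning to the listed users, just as allow_push restricts pushing (see the web section of `hg help config`; a sketch, with a placeholder user name):

        [web]
        # only this client may see or clone the repository
        allow_read = clientA
        allow_push = clientA

    With that in each repo's .hg/hgrc, a single global hgweb.passwd stops granting clone access to everything.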

    Read the article

  • Configure vlan on Netgear switch via SNMP

    - by Russell Gallop
    I am trying to configure VLANs on a Netgear GS752TSX from the Linux command line with Net-SNMP. I have created VLAN 99 in the web interface and now want to control the PVID settings, egress, and tagging. I have identified these as the MIB objects I need to change:

        dot1qPvid.<port>
        dot1qVlanStaticEgressPorts.99
        dot1qVlanStaticUntaggedPorts.99

    PVID works as I expect:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qPvid.17 u 99
        Q-BRIDGE-MIB::dot1qPvid.17 = Gauge32: 99
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qPvid.17
        Q-BRIDGE-MIB::dot1qPvid.17 = Gauge32: 99

    and so do the egress ports:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticEgressPorts.99 x 'ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        Q-BRIDGE-MIB::dot1qVlanStaticEgressPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticEgressPorts.99
        Q-BRIDGE-MIB::dot1qVlanStaticEgressPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    But the untagged-ports setting does not remember my value:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticUntaggedPorts.99 x 'ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        Q-BRIDGE-MIB::dot1qVlanStaticUntaggedPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticUntaggedPorts.99
        Q-BRIDGE-MIB::dot1qVlanStaticUntaggedPorts.99 = Hex-STRING: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    I have tried Net-SNMP 5.4.1 and 5.7.2. Is there something I'm doing wrong?
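
    A guess worth trying (untested on this switch; based on how some Q-BRIDGE agents validate writes): the agent may check the untagged bitmap against the egress bitmap atomically and silently revert a lone set, so sending both objects in a single SET request, i.e. one PDU, can behave differently than two separate snmpset calls:

        BITMAP='ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        snmpset -r 1 -t 20 -v 2c -c private <switch> \
            dot1qVlanStaticEgressPorts.99 x "$BITMAP" \
            dot1qVlanStaticUntaggedPorts.99 x "$BITMAP"

    snmpset packs all varbinds given on one command line into a single request, so no special tooling is needed. If the value still reverts, the switch's documentation may require driving dot1qVlanStaticRowStatus through a modify cycle instead.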

    Read the article

  • Find edited Word attachment opened via Windows Mail (Vista)

    - by Tony Meyer
    If you open (not save) a Word attachment in Windows Mail, it opens in Microsoft Word. The file is saved somewhere, and you can edit it and save it (i.e. no "Save As" dialog appears). If you then close Word without explicitly saving the file somewhere else, how do you find the edited file (assuming it can be found and isn't automatically deleted)? Windows Search does not find it, presumably because it sits in folders that are excluded from search.
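
    A brute-force sketch from a command prompt (the paths are educated guesses: Windows Mail generally unpacks opened attachments under the user's temporary files, and Word keeps AutoRecover .asd copies in its own application-data folder):

        dir /a /s /b "%TEMP%\*.doc*"
        dir /a /s /b "%APPDATA%\Microsoft\Word\*.asd"

    The /a switch includes hidden and system files, so the listing is not subject to the index exclusions that blind Windows Search.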

    Read the article

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directives to enable a sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop (note that static files such as myurl.com/css/mycss.css do not get redirected):

        RewriteEngine on
        RewriteCond ${REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    Now that I have introduced SSL on my site, I want the same behaviour for all pages except admin.php and login.php: requests to those two pages should be redirected to HTTPS, and all other requests processed as above. I have come up with the following .htaccess, but it does not work: https://myurl.com/shop does not get redirected to http://myurl.com/index.php/shop, and http://myurl.com/admin.php does not get redirected to https://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L]
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]
        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
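
    A hedged diagnosis and sketch: %{REQUEST_URI} always begins with a slash, so a pattern like ^(admin\.php$) can never match it, which is why neither redirect fires (and ${REQUEST_URI} with a dollar sign, as in the first ruleset, is not a mod_rewrite variable at all; that condition simply never matched anything). Something along these lines expresses the intent:

        RewriteEngine On

        # admin.php and login.php must be HTTPS
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(admin|login)\.php$
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # everything else goes back to plain HTTP
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(admin|login)\.php$
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # front controller for everything that is not a real file or directory
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]

    RewriteRule ^ with no capture just means "match any request"; the untouched %{REQUEST_URI} is reused in the target, so the path survives the redirect intact.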

    Read the article

  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest. The guest has a Bitnami LAMP stack installed. I have the guest configured for bridged networking, and I can access the guest web server just fine from other machines on my LAN using the guest's IP. I'm trying to configure port forwarding so that I can access the web server from outside my LAN (the router is a 2Wire model, as I'm on AT&T's U-verse). I've set up port forwarding for ports 80 and 443 to the guest's IP, in a similar manner to how I had them set up for my previous, physical web server, which worked just fine. However, I cannot seem to access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Anyone have advice on what I should try next? EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable; that doesn't seem to help either. However, after checking the router's port forwarding in more detail, I may see the problem. My VM is named "linux" and in the router's configuration pages it shows up inconsistently: sometimes it reports a valid LAN IP and other times it doesn't show up with any IP. Even when it shows the correct IP, the router indicates that it is disconnected. Could this be an indication that the 2Wire router doesn't play well with VirtualBox's bridged networking mode?
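
    A quick way to localize the failure (a sketch): confirm the guest is listening on all interfaces, then probe from each side of the router:

        # on the guest: 0.0.0.0:80 means all interfaces
        sudo netstat -tlnp | grep ':80'
        # from a machine on the LAN (already known to work):
        curl -I http://GUEST_LAN_IP/
        # from outside the LAN (e.g. a remote shell, or a phone off wifi):
        curl -I http://YOUR_EXTERNAL_IP/

    If the LAN probe works and the external one doesn't, the router is the suspect; 2Wire gateways tie forwarding rules to entries in their device list, so a guest with a flapping entry, as described in the edit, may be exactly the problem. Pinning the VM's bridged adapter with a DHCP reservation or static entry on the gateway may stabilize it.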

    Read the article

  • Automating SQL Express backup via VSS

    - by Ornus
    I need to set up automated daily SQL database backups on my server (SQL Express, so no maintenance plans). To keep things simple I'm going to use a backup solution (JungleDisk) that uses VSS to back up the DB file. SQL Server fully supports VSS and freezes DB I/O on request, so I understand I'm taking consistent snapshots. JungleDisk supports differential backup and compression, which simplifies things and keeps the cost/bandwidth down. Is it enough to just back up the database file (.mdf), or do I need to back up the transaction log (.ldf) as well? I'm OK with losing a day's worth of work (since the last backup). If I go this route, what's the best way to restore the database? Are there any issues with this approach I'm not aware of?
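
    For comparison, native backups work on Express too, with no maintenance plans needed: a scheduled task running sqlcmd produces a .bak that restores in one step with RESTORE DATABASE, which sidesteps the mdf/ldf question entirely. A sketch (instance, database, and paths are placeholders; the schtasks quoting is approximate):

        sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MyDb] TO DISK='D:\Backups\MyDb.bak' WITH INIT"

        schtasks /create /tn "MyDb nightly backup" /sc daily /st 02:00 /tr "sqlcmd -S .\SQLEXPRESS -E -Q \"BACKUP DATABASE [MyDb] TO DISK='D:\Backups\MyDb.bak' WITH INIT\""

    As for the VSS route: a snapshot of the .mdf alone is generally not restorable on its own; the SQL Server VSS writer quiesces the database so that the .mdf and .ldf pair is consistent together, so both files belong in the backup set.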

    Read the article

  • Connecting a laptop to a TV via HDMI

    - by Madmartigan
    I just bought a new Dell XPS17 laptop (Win7) that only has HDMI output. My last 2 laptops had VGA, which I used to connect to my Sony Bravia 32" TV with no issues, but with the HDMI it's been quite a headache. Drivers for display adapters have been updated to the latest versions: Intel(R) HD Graphics Family NVIDIA GeForce GT 550M I went to a store and plugged in to 4 different TVs from different manufacturers. A sales rep and I spent about 30 minutes being baffled by the results (which are the same as my current TV): Extreme buggy behavior in the Nvidia and Windows display/resolution control panel Can not extend or duplicate displays, can only select one Third and fourth output devices "randomly" detected by the Windows control panel Could not get the screen to fit the output (edges cut off on all sides by about a half inch) Resolution and colors less than perfect. Artifacts around text. Display "randomly" cuts out Defaults to TV output only when plugged in Can not change resolution on either device when connected No audio from the TV Plugged in to 3 monitors from different manufacturers: Defaults to duplicated displays when plugged in Everything works perfectly So far, four people have gone through all the settings in the latop with no luck. I had similar, but not exactly matching results with a different laptop. I'm using the Sony Bravia currently at home, but in order to get it to work I have to turn on the laptop, wait until the display shows up on it, close the lid, then cycle through each output channel on the TV until I come back around to the HDMI port again, but still I have the symptoms described above. However: Once in a while, it just works. Sometimes, seemingly randomly, the output fits the screen perfectly. Sometimes the audio comes through the speakers too, but not always. Usually my screen saver "Mystify" will come up with a message that it cannot be displayed due to a limitation of the video card, but then sometimes it works fine. These 3 things seem to be independent of each other and don't always happen together. So, is there any way to get the laptop to output correctly to a TV, or is it just not meant to be?

    Read the article
