Search Results

Search found 11954 results on 479 pages for 'gets'.

Page 313 of 479

  • Digitize video?

    - by kire
    I have some movies on VHS tape that I want to move to my computer somehow (for personal use). I have a video capture card and all the required hardware; what I'm looking for is the software: programs, codecs, etc. I like the format that most torrents come in, and I have some of these files I'd like to use as a reference for comparison. How can I see what codec, bitrate, etc. a movie uses, so I can pick the same settings and know the result will work and look good? For AVI files the bitrate is visible in Explorer, but it doesn't mention the codec used, and I also have a lot of MKV files that Explorer can't handle. All kinds of tips, tricks and other suggestions are welcome; this is completely new to me. For example, how do I keep the video and audio from getting out of sync? Many downloaded movies have the audio out of sync, so I guess this happens quite easily. The encoding program has to run on Windows, and for playback the movies should work at least in VLC for Windows.
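
    One way to answer the "what codec and bitrate is this file using" part, assuming ffmpeg is installed (its ffprobe tool reads AVI and MKV alike; the MediaInfo GUI is a popular alternative on Windows):

      # print the container details plus per-stream codec, resolution and bitrate
      ffprobe -v error -show_format -show_streams reference.mkv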

    Read the article

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It has always worked fine using the DevEject command-line utility. This week it failed for some reason:
      DevEject 1.0 2003 c't/Matthias Withopf
      Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
      Error ejecting device USB Mass Storage Device, vetoed (15,5)!
    Worse yet, using the Safely Remove Hardware tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

    Read the article

  • Can I get ethernet out of my Verizon FIOS set-top box?

    - by Tom Hughes
    Setup: my home network is long and skinny, and the FIOS-connected router is all the way at one end of the apartment. At the other end, far away (and a floor higher), is my HD TV, which gets a cable-TV signal from a Verizon set-top box that is coax-connected back to the FIOS on-premises equipment. Wi-Fi won't work; the apartment is too stretched out, with old, thick walls and floors. Goal: I think there are three ways to get Ethernet back to where the HD TV is: 1) Run a cable! This isn't crazy, but it isn't cheap either (my building won't let me do it myself; it involves hiring an electrician because the cable would run partly through the public hallway ceiling). 2) Split the coax near the TV and put in... a MoCA device? 3) Somehow persuade the set-top box, which has an Ethernet port on the back, to give me network access. Question: are there any other choices? And is one choice better than the others? #3 is by far the most desirable because it would involve the least wiring, but I can't find any resources to help make it happen. #2 is a bit scary; I don't want to degrade service to the TV, or anywhere else for that matter.

    Read the article

  • Postfix with relayhost - relay access denied for bounces

    - by Alex
    I have set up a Postfix mail server; outgoing mail is sent through a smarthost/relayhost which requires authentication. That works great: internal clients can send to foreign recipients through this relayhost. However, when external mail for a local, non-existent user arrives at the server, Postfix tries to send a non-delivery notification back to the sender. This mail is obviously also sent through the relayhost, but it fails with error 554 5.7.1: Relay access denied. This gets logged to mail.log:
      Nov 9 10:26:42 mail postfix/local[5051]: 6568CC1383: to=<[email protected]>, relay=local, delay=0.13, delays=0.02/0.02/0/0.09, dsn=5.1.1, status=bounced (unknown user: "test")
      Nov 9 10:26:42 mail postfix/cleanup[5045]: 85DF9BFECD: message-id=<[email protected]>
      Nov 9 10:26:42 mail postfix/qmgr[4912]: 85DF9BFECD: from=<>, size=3066, nrcpt=1 (queue active)
      Nov 9 10:26:42 mail postfix/bounce[5052]: 6568CC1383: sender non-delivery notification: 85DF9BFECD
      Nov 9 10:26:42 mail postfix/qmgr[4912]: 6568CC1383: removed
      Nov 9 10:26:43 mail postfix/smtp[5053]: 85DF9BFECD: to=<[email protected]>, relay=mail.provider.com[168.84.25.111]:587, delay=0.48, delays=0.02/0.01/0.26/0.18, dsn=5.7.1, status=bounced (host mail.provider.com[168.84.25.111] said: 554 5.7.1 <[email protected]>: Relay access denied (in reply to RCPT TO command))
      Nov 9 10:26:43 mail postfix/qmgr[4912]: 85DF9BFECD: removed
    Judging by this error, I suppose that Postfix does not log in at the relayhost when sending those bounces. Why? Normal outgoing mail works just fine. This is what my main.cf looks like: http://pastebin.com/Uu1Dryxy And of course /etc/postfix/sasl_password contains the correct credentials for the relayhost. Thanks in advance!
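
    For reference, a minimal sketch of the client-side SASL settings that the relayhost path normally needs (the poster's real main.cf is only on pastebin, so the map path is assumed). Note that the log shows the bounce did reach the relayhost, just without authenticating; one common cause of that symptom is sender-dependent authentication (smtp_sender_dependent_authentication), where the bounce's null sender <> matches no credentials entry:

      # enable SMTP client authentication towards the relayhost
      sudo postconf -e 'smtp_sasl_auth_enable = yes'
      sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_password'
      sudo postconf -e 'smtp_sasl_security_options = noanonymous'
      sudo postmap /etc/postfix/sasl_password
      sudo postfix reload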

    Read the article

  • What do the readonly attributes in diskpart really mean?

    - by marzipan
    I am wondering what the "Read-only" disk and volume attributes that you can twiddle in diskpart on Windows 7 really mean. I am trying to set up an external USB drive as an installation medium for my own software, so I'd like to protect it against casual or inadvertent changes by the users it is given to, so they don't screw up the installation files they might need in the future. From what I can tell by experimenting with diskpart, the volume read-only attribute is actually stored somewhere on the physical disk, because I can set it and it shows up when I take the drive to another machine. This is great, because my users can't (easily) change any of the files on the volume or format it from Windows Explorer. However, the disk read-only attribute seems to be just an aspect of how the current machine is accessing the drive. When I set it, I can no longer delete the volume on the disk via Disk Management, but when I take the drive to another machine, the attribute is no longer set and in Disk Management I can delete the volume on the disk. I guess I'm not that worried about my users doing that, but I am annoyed that I don't understand what these attributes are really doing. Another thing I don't understand is that the "volume" read-only attribute actually seems to be global to the disk: if I have two volumes on the disk and I set the read-only flag on one of them, it gets set on the other one too. ?!? I have the feeling I'm not searching for the right docs; all I'm finding is diskpart docs that give the syntax for twiddling these attributes, not what they really mean. Any pointers would be very welcome! Thanks, Asa

    Read the article

  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a request-tracker installation on an EC2 micro instance. The specs for the micro instance are as follows: 1) Ubuntu 12.04 64-bit, 613 MB RAM, 8 GB hard drive; 2) request-tracker 4.0.4 from the repository, Perl 5.14.2, Apache2, MySQL 5; 3) request-tracker 4.0.4 running with mod_perl2 and the worker MPM; 4) Apache configured with the worker MPM. Config snippet given below:
      Timeout 150
      KeepAlive On
      MaxKeepAliveRequests 60
      KeepAliveTimeout 2
      <IfModule mpm_worker_module>
          StartServers 2
          MinSpareThreads 25
          MaxSpareThreads 75
          ThreadLimit 64
          ThreadsPerChild 25
          MaxClients 150
          MaxRequestsPerChild 0
      </IfModule>
    Now when I start Apache2 it works fine for some time, and after a while the CPU load shoots up to 99% or more; usually it is one or more Apache processes doing this. I've tried to modify the worker module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to log debug messages and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time and usually (90% of the time) it is just 2. I've just installed it and we only have 20 tickets in the database. I don't think it's memory that's causing the issue, since the server isn't swapping or even close to it, and I hardly see the memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7 GB) and I still see the issue.
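
    A hedged starting point for the troubleshooting: assuming the stock Ubuntu Apache layout, mod_status plus a per-thread view in top will usually show whether a single request is spinning or the workers are being recycled:

      # enable the server-status page (Ubuntu allows it from localhost by default)
      sudo a2enmod status && sudo service apache2 restart
      # watch individual Apache threads instead of whole processes
      top -H -p "$(pgrep -d',' apache2)"
      # list what each busy worker is currently serving
      curl http://localhost/server-status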

    Read the article

  • Ubuntu Pound Reverse Proxy Load Balancing Based off active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly chooses which backend server to forward each request to. I've put one backend machine under so much load that it went into swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine, but it doesn't. I've read the man page, and it seems like the directive "DynScale 1" is what should monitor this, but it still redirects to the overloaded server. I've also added "HAport 22" to the backends, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it gets rid of the load and responds, but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.
      ######################################################################
      ## global options:
      User "www-data"
      Group "www-data"
      #RootJail "/chroot/pound"
      ## Logging: (goes to syslog by default)
      ##  0  no logging
      ##  1  normal
      ##  2  extended
      ##  3  Apache-style (common log format)
      LogLevel 3
      ## check backend every X secs:
      Alive 5
      DynScale 1
      Client 1200
      TimeOut 1500
      # poundctl control socket
      Control "/var/run/pound/poundctl.socket"
      ######################################################################
      ## listen, redirect and ... to:
      ## redirect all requests on port 80 to SSL
      ListenHTTP
          Address 192.168.1.XX
          Port 80
          Service
              Redirect "https://xxx.com/"
          End
      End
      ListenHTTPS
          Address 192.168.1.XX
          Port 443
          Cert "/files/www.xxx.com.pem"
          Service
              BackEnd
                  Address 192.168.1.1
                  Port 80
                  HAport 22
              End
              BackEnd
                  Address 192.168.1.2
                  Port 80
                  HAport 22
              End
          End
      End

    Read the article

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit in with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go: I uninstalled TortoiseSVN and clicked to restart the computer; my machine had been up for around 28 days. On reboot, my mouse and keyboard no longer worked, and I couldn't log in. I tried every USB port on my Dell 390 and the ports on my Dell 19-inch monitors; nothing worked. They had power, but Windows would not respond when I manipulated the keyboard or mouse. I rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS. The keyboard and mouse work fine on other computers over USB. I found adapters to convert the keyboard and mouse from USB to PS/2, and that works fine; I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard. It appears to be a Windows 7 x64 issue. Other things I have tried: multiple other mice and keyboards, and an iPhone, all with no luck; each one gets power, but Windows never tries to install drivers or sees that they are connected. I uninstalled and reinstalled all USB drivers; the drivers uninstall and reinstall fine and report no errors in Control Panel. In Power Management I disallow Windows from turning off USB ports to save power. I installed the latest nVidia drivers for my graphics card; no change. Anyplace else I can look, or anything else to try? Thanks!

    Read the article

  • USB 3.0 hard disk not detected on a particular host controller?

    - by Alvin Wong
    I have a USB 3.0 hard disk which has always worked on my desktop, which has an XHCI controller. I just bought a notebook that also has an XHCI controller (an Intel Ivy Bridge setup). The first time I plugged the hard disk into one of its USB 3.0 ports it was detected and working. A few hours later I tried to connect it again, but the notebook just ignored it: the light on the hard disk didn't blink as usual (instead it stays on). I then tested it on my desktop again and it works perfectly. It gets trickier: when I plug it into the USB 2.0 port of the notebook, it is detected and works perfectly (despite the slower speed). Then I tried plugging a USB 2.0 flash drive into that USB 3.0 port, and it is detected (of course as USB 2.0). So, there are two USB 3.0 ports on my notebook's XHCI controller; both of them fail with my hard disk but work perfectly fine with my USB 2.0 flash drive. What's wrong? When I plug in the hard disk, Device Manager doesn't change. I've tried re-installing the driver for the XHCI controller, but it changes nothing. Have I broken the USB 3.0-specific pins of both USB 3.0 ports?

    Read the article

  • Linux DHCPD Mac-Address based Groups

    - by GruffTech
    Our current dhcpd.conf looks like the following:
      subnet 10.0.32.0 netmask 255.255.255.0 {
          range 10.0.32.100 10.0.32.254;
          option subnet-mask 255.255.255.0;
          option broadcast-address 10.0.32.255;
          option domain-name-servers 208.67.222.222,208.67.220.220;
          option routers 10.0.32.5;
          host Dev-ABaird-W {
              hardware ethernet 00:1D:09:3E:49:13;
              fixed-address 10.0.32.94;
          }
          ... more static hosts ....
      }
    About as basic as it gets. The old router is 10.0.32.1. Our company wanted to implement a Squid proxy to better monitor web traffic while at work and, if necessary, block the large time-wasters, e.g. Facebook.com. However, we've quickly realized that this change has played a mean prank on our Polycom SIP phones: occasionally our phones will not ring. The end recipient hears ringing (this is artificially created by our PBX), but the handset never rings. The ONLY thing that has changed in our network is the option routers line. So, since all Polycom MAC addresses begin with 00:04:F2, would it be possible in DHCP to say that any 00:04:F2:* MAC address gets option routers 10.0.32.1, and anything else must talk to our new gateway?
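
    A sketch of what that might look like in ISC dhcpd (the class name and the config path are assumptions, and option placement is worth checking against the dhcpd.conf man page for your version). Classes can match on a substring of the client's hardware address, and statements inside the class apply only to its members:

      # append a class declaration at global scope
      sudo tee -a /etc/dhcp/dhcpd.conf > /dev/null <<'EOF'
      class "polycom-phones" {
          # "hardware" evaluates to the type byte followed by the MAC,
          # so these three octets are the Polycom OUI 00:04:f2
          match if substring (hardware, 1, 3) = 00:04:f2;
          # phones keep the old router; everything else still inherits
          # the subnet-level "option routers 10.0.32.5"
          option routers 10.0.32.1;
      }
      EOF
      sudo service isc-dhcp-server restart    # "service dhcpd restart" on RHEL-family boxes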

    Read the article

  • Pair programming with tmux and Vagrant

    - by neezer
    Does anyone have a clear step-by-step guide for setting up a shared tmux session on a Vagrant vbox that my coworkers (on our local office lan) could SSH into? The articles I've found online only seem to cover setting this up from machine to machine (no virtualbox setups), and I'm not very good at networking, so I haven't been able to extrapolate a solution... We're all running the latest Macs in our office, btw. Here's one article I've found but haven't been able to get working with Vagrant: http://blog.voxdolo.me/remote-pairing-with-vim-and-tmux.html EDIT: To clarify, I don't really know how I should be setting up Vagrant to allow me to SSH into it from a machine outside the one hosting the VM. The article above suggests that I add the tunnels host on my physical machine running the VM (here-on referred to as the MBP), so I did that. Next is the ProxyCommand host declaration, which I have also assumed should live on the MBP. So next I try SSHing into the MBP from a guest machine (another separate physical machine on my network), and that seems to work... but that only gets me into the MBP, not the Vagrant image running on the MBP. I normally login Vagrant image on the MBP via vagrant ssh (per the docs), and I know how to forward ports on the Vagrant VM to the MBP, but it's unclear to me how I could forward ports/SSH from the MBP to the Vagrant VM, which I assume I would need to do so that my guest machine could SSH in--through the MBP--to my Vagrant image. That, in a nutshell, is what I'm trying to accomplish. I do my development work in Vagrant VMs which keeps my MBP nice and clean of any dev-related cruft and also keeps my dev environments totally isolated from one another, yet I would like to start pair-programming with my coworkers via tmux, thus the reason why I've asked this question. I would like to accomplish all of this without setting up an additional user account on the MBP, or giving my coworkers access to my local user account on the MBP to get to my Vagrant VM, if that's at all possible.
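
    One way this is commonly wired up, sketched under two assumptions: the Vagrantfile gives the VM a LAN-visible address (a bridged/"public_network" interface), and coworkers are allowed to log in as the vagrant user (shared key or the default password). With that in place, no extra accounts on the MBP are needed and the MBP is never a hop in the path:

      # inside the Vagrant VM: start the session that will be shared
      tmux new-session -s pair

      # on a coworker's Mac: ssh straight to the VM's LAN address and attach
      ssh vagrant@192.168.1.50      # address is an example; check "ip addr" inside the VM
      tmux attach -t pair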

    Read the article

  • Cable management techniques

    - by cornjuliox
    How do you manage the giant jungle of cables behind your PC? When you have two or more PCs next to each other, you wind up with this giant mess of cables that's a pain in the neck to clean, especially when both computers are running 24/7 and any fidgeting with the cables is likely to cause data loss and/or angry users. So far I've tried masking tape, cable ties and plain old string, but none have been very effective. The masking tape kept the cables in place, but over time it left this awful sticky residue on the sides of the cables that just won't come off, gets all over your fingers, and is horrible, horrible, horrible. I have nightmares about that stuff. We used cable ties and 'folded' up some of the longer cables so that they weren't any longer than they needed to be, but this meant that the position of some of our devices, like the keyboard and the mouse, was essentially 'fixed' until we removed the ties. The string didn't work much differently and required that we tie it properly or risk it coming loose. I would switch to a wireless keyboard and mouse, but I don't want to deal with the added expense of batteries, even rechargeable ones. Plus I don't want them to die on me at a crucial moment (happened to me once while playing Firearms _<). How do large offices and data centers manage their masses of cable?

    Read the article

  • Understanding NFS4 exports and pseudofilesystem

    - by Trevor Harrison
    I think I understand the way pre-NFSv4 exports work, specifically the namespace of the exported point (i.e. export /mnt/blah on the server, then mount server:/mnt/blah /my/mnt/point on the client). However, I'm having a hard time wrapping my head around NFSv4 exports. What I've been able to gather so far is that you export a 'root' by marking it with fsid=0, which you then import on the client side by referring to it as '/' (i.e. exportfs -o fsid=0 /mnt/blah on the server, mount server:/ on the client). However, after that, it gets a little weird. From my playing around, it seems I can't export anything else that's not under /mnt/blah. For example, exportfs /home/user1 fails when trying to mount from the client unless /mnt/blah/home/user1 exists on the server. If this is the case, what is the difference between exportfs /mnt/blah/subdir1 on the server plus mount server:/subdir1 on the client, and just skipping the exportfs and mounting whatever subdirectory of /mnt/blah you want? Why would you need to export anything other than the root? It's all in the same namespace anyway.
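
    For what it's worth, the usual recipe (sketched here with the poster's /mnt/blah pseudo-root and an assumed client network) is to bind-mount the directories you want visible into the fsid=0 tree and export each of them; the client then mounts paths relative to that root:

      # make /home/user1 visible inside the NFSv4 pseudo-root without moving data
      sudo mkdir -p /mnt/blah/home/user1
      sudo mount --bind /home/user1 /mnt/blah/home/user1
      # export the root (fsid=0) and the bind-mounted subtree; network is assumed
      sudo tee -a /etc/exports > /dev/null <<'EOF'
      /mnt/blah             192.168.0.0/24(rw,fsid=0,no_subtree_check)
      /mnt/blah/home/user1  192.168.0.0/24(rw,no_subtree_check)
      EOF
      sudo exportfs -ra

      # client side: the path is relative to the fsid=0 root
      sudo mount -t nfs4 server:/home/user1 /my/mnt/point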

    Read the article

  • Access Denied on Some Subfolders/Files Within a Share

    - by Tim
    First thing this morning, I found that users on one of our shared drives were all getting "access denied". I tried the same drive and also received "access denied" as a Domain Admin. Previous to this, all specified users and admins had access. I checked share permissions; I checked NTFS permissions; I temporarily set both types of permissions to read/write for "Everyone", which worked for one user. It turns out that this is occurring for only some files/folders. When I try to manually alter the sharing of that single share, it can't be shared: access denied. xcacls also gets access denied. I rebooted the server (not a big deal; this is a smallish company). Does anybody have any insight? My google-fu is coming up blank. Thanks. EDIT: More info. I just ran AccessEnum. There were a lot of "access denied" entries, but I noticed the pattern that all of the denied items had a parent with an owner of "???". When I look at the properties, the "Unable to display owner" message is in the box and I can only make my user account the owner. I can then share the individual file/folder, but it doesn't seem to propagate down to subfolders/files.

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to setup a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here: HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions, I've been banging my head against this one for a bit...

    Read the article

  • Secretary cannot add appointments to boss's calendar after exchange restore from backup

    - by therulebookman
    The calendar is the boss's calendar on Exchange. I have set permissions for it through his Outlook to give the secretary and a few other people "Editor" access to his calendar. All the editors can view the calendar, but only he can add new appointments. Anyone else who tries to add an appointment gets "The item cannot be saved in this folder. The folder was deleted or moved or you do not have permission." The permissions are correct: Editor. The item hasn't been deleted or moved; it's in his mailbox on Exchange. The message says something about the mailbox size, but he is well under the size limit anyway. He is using Outlook 2003, and I have tried accessing it from 2003 and 2007, but I don't think that is related. I tried clearing the forms cache and enabling disabled items: there were no disabled items, and clearing the cache didn't help. I also tried "Allow all forms", but this apparently doesn't apply in this scenario as we are not using any custom forms. Is there any way to delete just his calendar so that I can exmerge it back in (after exporting to PST, of course)? I really can't exmerge out his mailbox, delete it, and exmerge it back in, because he works all sorts of hours, but if this is the only way, then I'll have to do it. Is there any other possible solution?

    Read the article

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, store it in a central location, and clearly mark the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for the origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters replicating to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone; if there is a solution that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.

    Read the article

  • OS X: MySQL not dealing properly with data directory via soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home directory. I used sudo cp -a /opt/local/var/db/mysql5 ~/db/ (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link: sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5. However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:
      110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
      110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
      110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
      110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
      110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
      110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
      110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
      /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
      110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
      110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
      110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
      110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
      110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended
    I can't see why MySQL can't create the pid file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
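
    Since a manual touch as _mysql works from inside the directory, the denial may come from the directories above the symlink target rather than from the data directory itself: mysqld has to be able to search every parent of ~/db/mysql5 when it resolves the full path (and with legacy FileVault the home is only mounted while you are logged in). A few hedged checks, assuming the MacPorts daemon runs as _mysql:

      # can the daemon user even reach the directory through the full path?
      sudo -u _mysql ls ~/db/mysql5
      # ownership of the copied data directory
      sudo chown -R _mysql:_mysql ~/db/mysql5
      # give the parent directories the search (x) bit the daemon needs to traverse them
      chmod o+x ~ ~/db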

    Read the article

  • Using OSX home directories from linux

    - by Steffen
    I'm running an OS X (Snow Leopard) server with Open Directory, which is essentially a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OS X's LDAP server; this works fine already. What I struggle with is the way the home folders are specified in OS X. If I query the passwd database on one of my Linux machines, the entries imported from OS X look like this:
      myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash
    While those network home folders might be fine for OS X clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OS X user inspector, but if I change this, the whole user home path gets changed. Since my users should be able to log in on both systems, OS X and Linux, this is not what I want. Does anyone have an idea how I must configure OS X so that my Linux machines use home folders like /net/myaccount while the configuration for OS X clients stays untouched?
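
    Instead of changing anything on the OS X side, one option (a sketch, assuming the Debian boxes resolve LDAP users through libnss-ldapd/nslcd, and noting that expression mappings like this need a reasonably recent nss-pam-ldapd; older releases only map one attribute to another) is to override the home-directory attribute locally in the NSS layer, so the same LDAP entry yields different home paths on Linux and on OS X:

      # rewrite the home directory reported to Linux, leaving the directory server untouched
      echo 'map passwd homeDirectory "/net/$uid"' | sudo tee -a /etc/nslcd.conf
      sudo service nslcd restart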

    Read the article

  • CentOS Latency High Troubleshooting

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gb fiber optic cable, with a network emulator connected between them. All three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each of the machines, which provides internet connectivity. When I set the latency to something roughly below 30 ms, all is fine. When the latency gets to 70 ms and above, and definitely at 130 ms, the network layer seems to hang. For instance, if I set the latency (delay) to 70 ms, then launching TeamViewer (or any other application that uses network connectivity) never completes or does not work. There is no timeout message, simply no response. I have to lower the latency back down to zero to see any response and have the box start working. What is the problem, and how would I go about fixing it? It seems to me some setting in Linux causes one of the CentOS networking drivers to sit in an infinite loop or something. eth0 is the connection to the internet, with all settings at their defaults; eth2 is the 10 Gbit fiber optic connection to the other computer, with the MTU set to 9600 and all other parameters at default values.

    Read the article

  • Programs don't have permissions when using absolute path

    - by Markos
    I have asked this on Ask Ubuntu but didn't get a single response in days, so I will try it here. I have a directory structure like this: /path/dir1 - all users in group1 must have rwx permissions, including on subdirectories and newly created directories; /path/dir1/dir2 - users in group2 must also have rwx permissions. So what I tried is ACLs. getfacl /path/dir1:
      # file: /path/dir1
      # owner: root
      # group: nogroup
      user::rwx
      group::---
      group:group1:rwx
      mask::rwx
      other::---
      default:user::rwx
      default:group::---
      default:group:group1:rwx
      default:mask::rwx
      default:other::---
    getfacl /path/dir1/dir2:
      # file: /path/dir1/dir2
      # owner: root
      # group: nogroup
      user::rwx
      group::---
      group:group1:rwx
      group:group2:rwx
      mask::rwx
      other::---
      default:user::rwx
      default:group::---
      default:group:group1:rwx
      default:group:group2:rwx
      default:mask::rwx
      default:other::---
    That shows that I have granted rwx to group1 on /path/dir1, and rwx to group1 and group2 on /path/dir1/dir2. Now it gets interesting. Let's assume that user2 is a member of group2. If I issue these commands as user2: cd /path/dir1/dir2 followed by mkdir foo, then the folder is successfully created. However, if I do this: mkdir /path/dir1/dir2/foo, I get a permission denied error. I have tried extensively to resolve the problem. What I have found is that the ACL is to blame: if I add permissions for group2 on /path/dir1 it starts to work, and if I completely remove the ACL from /path/dir1 it also starts to work. Obviously I am missing something VERY basic. I don't have much experience with Linux, but this is a no-brainer on Windows. I have spent way too many hours trying to resolve this basic requirement. If you need more information, I will try to update the question, so feel free to ask!
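
    One thing worth checking (a hedged guess, not a definitive diagnosis): to reach /path/dir1/dir2 by its absolute path, user2 needs search (x) permission on /path/dir1 itself, which the ACL above denies to anyone outside group1. Granting group2 execute-only access on dir1 lets its members traverse it without being able to list or modify it:

      # allow group2 to pass through dir1 (no read, no write)
      sudo setfacl -m g:group2:x /path/dir1
      # verify the effective permissions
      getfacl /path/dir1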

    Read the article

  • Ubuntu Laptop as a wireless hotspot on bridge mode

    - by nixnotwin
    I have a wired router to which my Ubuntu laptop connects via Ethernet. The wireless NIC of the laptop acts as a wireless hotspot in master mode; I use hostapd for this. I have bridged eth0 and wlan0, so the wireless clients that connect to my laptop over Wi-Fi get an IP from the wired router via DHCP and are registered at the wired router (the laptop is just an access point). I use the following commands to get my laptop working as an access point:
      sudo brctl addbr br0
      sudo brctl addif br0 eth0
      sudo hostapd /etc/hostapd/hostapd.conf &
      sudo dhclient -d br0 &
      sudo ifconfig wlan0 192.168.1.15 netmask 255.255.255.0 up
      sudo brctl addif br0 wlan0
    These commands give me internet access on the wireless clients and also on the laptop that is acting as the access point. But if I reboot the wired router (without rebooting the laptop), internet access on the laptop is lost, while on the wireless clients it keeps working fine. I have not even been able to figure out a command that will reset the laptop's interfaces to their default state, so every time the router reboots I have to reboot the laptop too and then re-enter the commands above. My first question is: how can I keep the bridge and access point up and running even though the router reboots? And is there a command to reset the interfaces to a default state? (ifdown -a doesn't work; after issuing the command the bridge still remains.)
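
    A sketch of a more persistent setup (interface names as above; hostapd's bridge option and the bridge-utils package are assumed): let hostapd add wlan0 to the bridge itself and define br0 in /etc/network/interfaces, so a router reboot only costs the laptop a DHCP renewal rather than a rebuild of the whole bridge:

      # in /etc/hostapd/hostapd.conf, have hostapd attach wlan0 to the bridge:
      #   bridge=br0

      # define the bridge persistently, with eth0 as its wired member
      sudo tee -a /etc/network/interfaces > /dev/null <<'EOF'
      auto br0
      iface br0 inet dhcp
          bridge_ports eth0
          bridge_stp off
      EOF

      # after the router comes back, renew the lease instead of rebooting
      sudo dhclient -r br0 && sudo dhclient br0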

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped on the way. Practically, though, as UDP is packet based, I do not really think any software would send a UDP packet larger than 1500 bytes without application-level configuration changes; at least this is how I do my programming, as it is quite hard to get a decent MTU size without testing yourself, so you fall back to packets of at most 1500 bytes. The network in question is a standard small-business network. We upgraded from an unmanaged 24-port switch to a 52-port switch with four 10G ports (Netgear, quite cheap) and will move a file server to 10G, which will also serve iSCSI. All my equipment at the Ethernet level can handle at least 9000 bytes, and due to local firewalls I really want to use larger packets (less firewall processing), but the network is also NATed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte) quite often for processing. The question is: can I expect problems when I just enable jumbo frames? Again, this is not total ignorance; I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations would then have to be on both VLANs.
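
    One quick sanity check before or after flipping the setting: send a payload that only fits in a jumbo frame with fragmentation disallowed, and see whether it survives the path. A hedged example, using 8972 bytes (a 9000-byte MTU minus 28 bytes of IP/ICMP headers) and a placeholder hostname:

      # from a Linux box on the same LAN
      ping -M do -s 8972 fileserver
      # the Windows equivalent is: ping -f -l 8972 fileserver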

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP; they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator, but IIS gets "access denied". Here's what I have done: set File Sharing = ON; set Password Protected Sharing = OFF; set Public Folder Sharing = ON; shared the folder; added permission to the share: Everyone, Full Control; added permission to the share: NETWORK SERVICE, Full Control; verified that File & Printer Sharing is checked in Windows Firewall; opened port 445 to inbound traffic from local sources. I tried adding <remote-machine-name>\NETWORK SERVICE to the share, but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given full control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how do I solve this? UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather, I am writing files to the share from my code (System.IO).

    Read the article
