Search Results

Search found 11954 results on 479 pages for 'gets'.


  • Troubleshooting extremely slow opening times in Win7 for documents on Win2k8 server

    - by Mazupan
    Hello. It's hard even to describe my problem. The slowness only occurs on Windows 7 (on XP things work fine): extremely slow opening times (up to 10 minutes) for files stored on Windows Server 2008. Here is what I have discovered so far. If I open .doc and .xls files by double-clicking (some files, not all, not always), it takes up to 10 minutes to finally open the file. During that time the file appears to be locked for all other users, and if I cancel opening, the file remains locked for some time. The owner of these files is whoever last wrote changes to them. If I change the owner to a larger group that I am a member of, the file opens super fast. Once opened, the file can be saved normally and quickly, and it reopens fast. One other user reports that the problem only occurs when opening files for the first time in a day; once he opens the first file, he has no problems with other files at all (or so he says). He also states that when accessing files from home via VPN he has no such problems. And now: anybody has a clue where to start looking? I suppose it is a misconfiguration problem. But where? File system? Permissions? DFS? VMware network config? My setup is as follows: Physical server: HP ProLiant ML350 G6. Virtual host: VMware ESXi 4. Guest: Windows Server 2008 Standard. Files are accessed via DFS shares. Please help me. Thanks. Mazupan


  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi as the machine (pre-logon) and as the user (post-logon). We have everything working correctly except for two things: 1) sometimes the login scripts don't run, and 2) the user VLAN is sometimes different from the machine VLAN, and no DHCP renew occurs after user logon. I am clear that both problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials". If I enable these GPO settings in a lab, the computer does authenticate and gets WiFi before the user logs on, so when the login box is displayed it says "Windows will try to connect to <SSID>", even though it is already connected (which should be OK?). Enter the user credentials and it goes to a screen saying "Connecting to <SSID>", hangs for 10 seconds, and fails with "Unable to connect to <SSID>, Logging on…". The desktop fires up, and then the user re-authenticates with no problem as himself instead of the machine, but by that point we have defeated the point of the WiFi SSO "before user logon". Also by that point no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN. When the "Connecting to <SSID>" screen comes up, there's no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered until after the domain logon. Also, with this policy enabled, Windows sometimes hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P
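
    For reference, those two GPO checkboxes correspond to a singleSignOn element in the applied WLAN profile XML, which makes it easy to verify what actually reached the lab client (a sketch of the relevant fragment only; element names follow the OneX profile schema, the values here are illustrative):

        <OneX xmlns="http://www.microsoft.com/networking/OneX/v1">
          <!-- authenticate as machine before logon, then re-authenticate as user -->
          <singleSignOn>
            <type>preLogon</type>
            <maxDelay>10</maxDelay>
            <allowAdditionalDialogs>false</allowAdditionalDialogs>
            <!-- maps to "different VLAN for machine and user credentials" -->
            <userBasedVirtualLan>true</userBasedVirtualLan>
          </singleSignOn>
        </OneX>

    On a test client, netsh wlan export profile name="<SSID>" folder=. dumps the profile as applied, so you can confirm the SSO settings survived the GPO before chasing the RADIUS side.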


  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - the VMs then run on differential discs off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there, it gets read protection set, and that is how it stays until it is retired. Obviously, I need as much uptime as possible. SO FAR we have run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Because of the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move toward a scenario where storage is more centralized. Server 2012 supports always-on shares, but it seems this only works with a clustered disc behind them. Is there any way around this for read-only file stores? All documentation points to things like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically: 2 servers, both with SSDs only for this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.
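
    If the DFS Replication route fits, here is a minimal sketch using the Server 2012 DFSR PowerShell cmdlets (group, folder, path, and server names are hypothetical; DFSR replicates closed files, which suits a write-once set like this):

        # one replication group, one replicated folder, two members
        New-DfsReplicationGroup -GroupName "MasterImages"
        New-DfsReplicatedFolder -GroupName "MasterImages" -FolderName "VHDs"
        Add-DfsrMember -GroupName "MasterImages" -ComputerName "FS1","FS2"
        Add-DfsrConnection -GroupName "MasterImages" -SourceComputerName "FS1" -DestinationComputerName "FS2"
        # FS1 seeds the content; both members store it under D:\VHDs
        Set-DfsrMembership -GroupName "MasterImages" -FolderName "VHDs" -ComputerName "FS1" -ContentPath "D:\VHDs" -PrimaryMember $true
        Set-DfsrMembership -GroupName "MasterImages" -FolderName "VHDs" -ComputerName "FS2" -ContentPath "D:\VHDs"

    Publishing both members under one DFS Namespace folder would then give the Hyper-V hosts a single UNC path with failover between the two servers, without any shared disk behind it.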


  • I have bought a custom domain and am using it with Gmail. All my mail is being marked as spam. What can I do?

    - by Leonnears
    A while ago, I purchased custom domains for my websites. Before I moved them to Gmail, I just created the e-mail accounts in my cPanel at Bluehost.com and worked from there. With that setup I could send and receive e-mail fine, and it wouldn't be marked as spam. Now I have moved these custom domains to send and receive e-mail at Gmail using Google Apps. I have done everything: I have marked the domains as "Authorized", and I believe that should be enough for the mail I send from these custom addresses not to be flagged as spam. If it matters, I have configured my iPhone to use these custom domains, and I'm sending all the e-mail from it. What can I do? I started doing all this today, but apparently the DNS changes have already taken effect. Is there something I still have to do, or is it just a matter of waiting 48 hours for my mail to stop being marked as spam by other providers? EDIT: If I send mail via Gmail itself, the mail is delivered fine. If I use my iPhone, however, it gets marked as spam.
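
    One common cause worth checking (a hunch from the symptoms, not confirmed in the question): the domain's DNS has no SPF record authorizing Google's servers, so receiving providers distrust the mail, especially mail submitted from a device. For a Google Apps domain, the usual TXT record looks like this (replace example.com with the actual domain):

        example.com.   IN   TXT   "v=spf1 include:_spf.google.com ~all"

    It is also worth confirming that the iPhone account sends through Google's outgoing server (smtp.gmail.com) rather than the old Bluehost SMTP server, since mail relayed through Bluehost would fail exactly this check once the record is in place.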


  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configuration. Our company has two e-mail domains, xyz.com and xyz.biz. Each employee has an e-mail address on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication e-mail is sent from the Accounts, HR, Admin, etc. departments, they need to send it twice: once through the xyz.com account to all employees with an address on xyz.com, and once through xyz.biz to all employees with an address on xyz.biz. I am not sure why they have to send two separate e-mails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an e-mail to employees in a group on xyz.biz from xyz.com does not seem to work. I want to know if Outlook provides a feature such that we can configure some kind of rule to send an e-mail through an ID on xyz.com to all users on xyz.com and have the same e-mail sent automatically to users on xyz.biz through an ID on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims this is a limitation causing the problem. Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ, whereas the employees in the other division use the domain ABC to log into their Windows machines or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.
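
    Absent a fix on the Exchange side, one client-side workaround is an Outlook 2010 VBA macro that sends the same message once per configured account (a sketch only: it assumes both mailboxes live in one Outlook profile, and the per-domain distribution list addresses are hypothetical):

        Sub SendViaBothDomains(subjectText As String, bodyText As String)
            Dim acct As Outlook.Account
            Dim mail As Outlook.MailItem
            For Each acct In Application.Session.Accounts
                Set mail = Application.CreateItem(olMailItem)
                mail.Subject = subjectText
                mail.Body = bodyText
                ' hypothetical all-staff lists, one per domain
                If acct.SmtpAddress Like "*@xyz.com" Then
                    mail.To = "allstaff@xyz.com"
                Else
                    mail.To = "allstaff@xyz.biz"
                End If
                Set mail.SendUsingAccount = acct   ' send from the matching account
                mail.Send
            Next acct
        End Sub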


  • How can I create two partitions and clone one to the other (using Clonezilla)?

    - by johnny
    I was hoping someone could help. I want to create a "backup" partition. I want to create two partitions on my drive: one is a good install, and I want to use Clonezilla to copy that good partition onto the broken/unused partition and have the restored partition boot up as usual. For example: D: is a "good" copy of C:. C: gets a corrupt registry. I restore D: to C:, and C: then boots up as usual. So I need to do the clone with Clonezilla and the restore with the same. I see the part_...clone and restore options. Will they do it? How do I get the partitions? EDIT: I am using XP. How can I do this? Also, I know this is not the best approach for all occasions; I have an offline backup as well, and I would like to have both. Thanks for any help. I'm using Clonezilla, if it matters.
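
    Clonezilla's saveparts/restoreparts modes are exactly this workflow; under the hood they use partclone. A command-line sketch of the same idea (device names are examples - verify yours first, since restoring onto the wrong partition is destructive):

        # image the good NTFS partition (say C: is /dev/sda1) to an external drive
        partclone.ntfs -c -s /dev/sda1 -o /mnt/usb/c_good.img
        # restore that image onto the backup partition (say /dev/sda2)
        partclone.ntfs -r -s /mnt/usb/c_good.img -o /dev/sda2

    For creating the partitions themselves, any partitioning tool (GParted on the Clonezilla live CD, for instance) can split the disk. One XP-specific caveat: a copy restored to a different partition position may need boot.ini adjusted (and possibly fixboot/fixmbr from the Recovery Console) before it boots.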


  • Is there any method of backing up Google Drive files in some sort of versioning system?

    - by VictorKilo
    Backstory: My company is utilizing Google Drive for our shared files. Each user has their own Drive account. In addition, we have a corporate Drive account which holds documents that are shared to each user. Each folder is shared to different users depending on their permissions and positions in the company. Many users are able to add files and update folders within this shared Drive account. This is fine. What is not fine is when someone deletes something that they shouldn't. I have little to no way of knowing when a file is deleted wrongfully. Furthermore, anything that gets deleted goes into the trash bin of the file's creator, so I can't just restore it from the trash. Question: Is there any method of backing up Google Drive files in some sort of versioning system that would allow me to revert files back to defined points in time? What I have tried: I currently have this corporate Drive account synced to my personal computer through the Google Drive application. Each night, I run a backup of the folder using Windows "Backup and Restore." This at least lets me get back files that are lost, but I would like a cleaner method, and it's very possible I may not have the latest version of a document on my computer when the utility runs.
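
    As a stopgap along the same lines, a nightly batch script can keep dated snapshots of the synced folder instead of a single overwritten backup (a sketch; the paths are examples, and the %date% slicing assumes a US-style MM/DD/YYYY locale):

        @echo off
        rem build a YYYY-MM-DD stamp from %date%
        set STAMP=%date:~-4%-%date:~-10,2%-%date:~-7,2%
        rem copy the synced Drive folder into a fresh dated folder
        robocopy "C:\Users\me\Google Drive" "E:\DriveSnapshots\%STAMP%" /E /COPY:DAT /R:2 /W:5 /LOG:"E:\DriveSnapshots\%STAMP%.log"

    Each run lands in its own folder, giving crude point-in-time versions to revert from; it does not, of course, fix the underlying problem that the sync may lag behind the latest edits.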


  • HP Laptop recognizes hard drive just long enough to install windows

    - by Joe
    I have an HP laptop, a DV6500 (CTO). It refused to boot one day, so I ran some diagnostics (a friend lent me Hiren's Boot Disk, UBCD, and PC DR 6). Everything passed except for the HDD. I replaced the HDD with a used drive of unknown condition and installed Windows with no problems. I installed the wireless driver, tried to reboot... no luck. So I went to Best Buy and bought a brand new Western Digital 320 GB HDD. Put it in the machine, installed Windows (Vista Home Premium), installed the wired networking driver, tried to reboot. No luck. Put the first HDD back in the machine, reinstalled Windows, started to install some drivers, went to reboot, and the machine won't come back to life. Put the second HDD in the machine; rinse, wash, and repeat. I've replaced the memory, even though it passed diagnostics; the problem exists with both the brand new memory and the old memory. The BIOS recognizes the hard drive. The computer freezes directly after the BIOS splash screen, and there is no hard drive activity light. I've tried two Linux live distros (Gentoo and Ubuntu); neither would run on this laptop, but both will on a different HP laptop. UBCD and Hiren's Boot Disk both ran, as did PC Doctor 6, which refuses to test anything (it gets stuck at "enumerating hard disks"). Is there anything else I can try?


  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I search through the data in the packet to see if one offset has a specific value, and if so I change the value at another offset. The trouble is, when I try this on a new virtual machine I built, my Ettercap filter is no longer getting any data in the DATA.data variable available to it.

        if (ip.proto == TCP && tcp.src == 17867) {
            msg("Response seen!\n");
            if (DATA.data + 2 == "\0x01") {
                msg("Flag detected!\n");
                DATA.data + 5 = 0x09;
            }
        }

    The filter is getting applied to the traffic, because "Response seen!" messages get printed out by Ettercap. However, "Flag detected!" messages do not. I think DATA.data is indeed empty, because if I change my second "if" statement to check for DATA.data == "" then the "Flag detected!" message gets printed. Any ideas why this may be happening? Also, if this is the wrong site to be asking questions like this, please let me know; I wasn't sure if it fit better here or somewhere like Super User or Server Fault. By the way, this is a cross-post from Stack Overflow... I should have posted on this forum instead, I think. :)
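
    Two things worth checking (educated guesses, not a confirmed diagnosis). First, "\0x01" is not the usual etterfilter byte escape - "\x01" is - so the inner comparison may never match even when payload is present. Second, the outer test matches every TCP segment from port 17867, including bare ACKs with no payload, which would print "Response seen!" while leaving DATA.data empty. A corrected sketch of the inner test, otherwise unchanged from the original filter:

        if (DATA.data + 2 == "\x01") {
            msg("Flag detected!\n");
            DATA.data + 5 = 0x09;
        }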


  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi, we have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is that its file sharing gets totally disabled after about 10-30 minutes. The only way to re-enable it is to restart the server, but the same thing happens again after each restart. This server runs SQL Server 2005 Enterprise, .NET Framework 3.5, and our .NET-based application server. We hadn't had such a problem with any other customer before, so we asked them to prepare another server to deploy our application on. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately, the same problem happened on the new machine too. Now the old machine works only as the database server and the new one as the application server, but both of them have the same file sharing problem. File sharing on the two machines doesn't get disabled at the same time, but it eventually happens to both. I wonder why this is happening and how to find the cause. Any suggestion or solution is much appreciated.


  • Powershell Script Scheduled Task Stopped Running (Could not Start)

    - by Hatsune Yuki
    I'm running a scheduled task (for a PowerShell script) on Windows Server 2003. I believe the script itself works fine. The task is scheduled to run every 10 minutes from 7:00am to 11:50pm every day. However, it never keeps running for more than a day; it always stops some time in the afternoon (between 2pm and 6pm). I'm not sure exactly what happens, but I always get the error: The attempt to log on to the account associated with the task failed, therefore, the task did not run. The specific error is: 0x80070569: Logon failure: the user has not been granted the requested logon type at this computer. Verify that the task's Run-as name and password are valid and try again. It seems most people with this error say the user needs to be granted the "log on as a batch job" right. However, this option is greyed out for me. I searched other places where users have similar problems, but the solutions are not written in detail (some of them have something to do with GPOs). I've only used the basic features of Windows Server, and I have no clue how to get to the place they are referring to. Can someone please confirm whether "log on as a batch job" is indeed the solution and provide a detailed walkthrough of how to fix my problem? Thanks. P.S. Someone suggested http://technet.microsoft.com/en-us/library/cc755659(v=ws.10); I tried to follow the method for a web server in a domain, but got stuck on the 6th step, where it mentions a Group Policy Object. I don't know where that is.
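
    For what it's worth, error 0x80070569 does map to the account lacking the "Log on as a batch job" right, and the option being greyed out usually means a domain GPO controls it (in that case it must be edited in gpmc.msc on a domain controller). A sketch of granting the right locally (the account name is a placeholder; ntrights.exe ships in the Windows Server 2003 Resource Kit Tools):

        rem grant the batch-logon right to the task's run-as account
        ntrights +r SeBatchLogonRight -u DOMAIN\taskuser

        rem the GUI equivalent, when no GPO overrides it:
        rem secpol.msc -> Local Policies -> User Rights Assignment -> "Log on as a batch job"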


  • Computer randomly shuts itself off

    - by Decency
    I have not been able to determine a pattern for why this happens, despite my best efforts. I've attempted to run the machine at full load with Prime95, and this doesn't trigger a restart. Generally the restarts occur while I'm playing games, watching videos, or even just having multiple tabs open in a browser. However, I often play processor-intensive games for hours without any restarts occurring, and sometimes they'll happen 3-4 times in an hour during less intense activity, so I don't think load alone is the problem. I imagine it has something to do with overheating or power consumption, so I've been monitoring CPU temperature and cleaning with compressed air, but the problem keeps happening. I don't know how to track power consumption, and suspect that this is the problem. Whenever this occurs, the sound gets stuck in a short loop of whatever was playing at the time, though restarts also occur when nothing is playing. (Screenshots of temperatures at idle and under load were attached to the original post.) Here's the parts list: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=10546754 As shown in the list, the case includes a 585 W power supply, which I've been told should be plenty. I built the computer myself with a friend's guidance, but it's very possible I did something wrong. Right now I'm looking into ensuring that I have the latest drivers for all components. Any help would be appreciated - thanks.


  • Linux RAID: Replacing Failed Drive...permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: the machine boots and the drive is seen by the BIOS, but no partition can be seen on it consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 gets 2 out of 3... again. The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (the drive has a physical problem, so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, the mdadm system seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I add /dev/sdd to the RAID and it rebuilds, but this action isn't remembered upon reboot, so it keeps listing /dev/sda and /dev/sdc but doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions, it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was automatically made on the original OS install and only lists:

        DEVICE partitions
        MAILADDR root
        ARRAY /dev/md2 super-minor=2
        ARRAY /dev/md0 super-minor=0
        ARRAY /dev/md1 super-minor=1

    Basically just the arrays to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the partition's "Cheshire Cat" behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
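
    A hedged sequence for evicting the half-dead drive and making the change stick (device and array names taken from the question; repeat per array for md0/md1/md2 as appropriate):

        # mark the flaky member failed and pull it from the array
        mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2
        # wipe its RAID superblock so auto-assembly stops picking it up at boot
        mdadm --zero-superblock /dev/sdb2
        # write the current, working assembly into mdadm.conf (then prune the old
        # super-minor ARRAY lines by hand so there are no duplicates)
        mdadm --detail --scan >> /etc/mdadm.conf

    If mdadm refuses the --fail/--remove because it cannot see the partition, zeroing the superblock (or simply pulling the disk) is the usual fallback; and if the distribution embeds mdadm.conf in the initrd, rebuild that too (e.g. update-initramfs -u on Debian) so the boot-time assembly sees the new configuration.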


  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card, and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped. I have 16 GB of RAM installed in an MSI 970A-G46, which is spec'd for up to 32 GB. The BIOS recognizes that I have 16 GB installed, and Resource Monitor also shows the whole 16 GB, only with 8 GB marked as hardware reserved. I've seen suggestions that it's an OS issue, but the installation of Windows 7 (64-bit) on my boot drive is the same one that could access the full 16 GB on my previous motherboard (MSI 870A-G54). I've updated the BIOS using the MSI Live Update tool and restarted the machine with no effect, and I cannot locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect. These are my system's basic specifications:

    OS: Windows 7 Home Premium (64-bit)
    Motherboard: MSI 970A-G46
    CPU: AMD FX-8150
    Graphics card: XFX Radeon HD 6870
    Boot drive: OCZ Agility 3
    Storage drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
    PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3
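
    One software-side cause worth ruling out before blaming the new hardware (a sketch; run from an elevated command prompt): stale truncatememory/removememory entries in the boot configuration, which cap usable RAM independently of the msconfig checkbox.

        bcdedit /enum {current}
        rem if truncatememory or removememory appears in the output, clear it:
        bcdedit /deletevalue {current} truncatememory
        bcdedit /deletevalue {current} removememory

    If neither value is set, exactly half the RAM showing as reserved usually points back at the platform: memory remapping disabled or absent in the BIOS, a memory channel not detected, or bent CPU socket pins after the rebuild.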


  • Cannot terminate process, "already terminated"

    - by felix-freiberger
    On Windows 8, I regularly get processes into a state where I can't terminate them. Skypekit.exe seems to be the process most likely to trigger the issue, but other processes can do it too. When I try to terminate these processes, I sometimes get an "access denied" message and sometimes nothing happens - but every following attempt to kill the process results in an "access denied" message as well, even though I have administrative rights (and ran the task manager with them), own the process, and have the right to terminate it. "Process Hacker 2" shows a more detailed error message, stating that I couldn't terminate the process because it is already terminated. Still, the process is most definitely still there, because every task manager I tested can see it. Process Hacker's "Terminator" is unable to kill such a process, and when running the "Close the process' handles" tactic, Process Hacker gets stuck itself, leaving its windows "not responding". In that state, other task managers are in turn unable to kill Process Hacker. The only way I found to actually end these processes is to shut down (which works without any problems). Why is this happening? How can I kill these processes?


  • PHP-FPM processes holding onto MongoDB connection states

    - by Brendan
    For the relevant part of our server stack, we're running: NGINX 1.2.3, PHP-FPM 5.3.10 with PECL mongo 1.2.12, MongoDB 2.0.7, CentOS 6.2. We're getting some strange but predictable behavior when the MongoDB server goes away (crashes, gets killed, etc.), even with a try/catch block around the connection code, i.e.:

        try {
            $mdb = new Mongo('mongodb://localhost:27017');
        } catch (MongoConnectionException $e) {
            die($e->getMessage());
        }
        $db = $mdb->selectDB('collection_name');

    Depending on which PHP-FPM workers have connected to Mongo already, the connection state is cached, causing further exceptions to go unhandled because the $mdb connection handle can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when - I assume - the PHP-FPM processes die/respawn. Essentially, the behavior is that when you hit a worker that hasn't connected to Mongo yet, you get the die message above, and when you hit a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name'); because the catch does not run. When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time. Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different PHP-FPM processes.
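
    A defensive pattern that may help (a sketch, assuming the 1.2.x PECL driver, which caches connections per worker): force a round-trip inside the try so a dead cached connection fails where it can be caught, then rebuild the handle once.

        <?php
        try {
            $mdb = new Mongo('mongodb://localhost:27017');
            // force a round-trip so a dead cached connection fails here, inside the try
            $mdb->selectDB('admin')->command(array('ping' => 1));
        } catch (MongoException $e) {     // parent class of MongoConnectionException
            // drop the dead handle and retry once with a fresh connection
            if (isset($mdb)) { $mdb->close(); }
            try {
                $mdb = new Mongo('mongodb://localhost:27017');
            } catch (MongoException $e) {
                die($e->getMessage());
            }
        }
        $db = $mdb->selectDB('mydb');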


  • Can an OS be copied from one hard drive to another and still boot?

    - by AlexMorley-Finch
    Background: My computer gets stuck on the make-and-model screen after the BIOS screen, aka the Toshiba screen. After some research I've realized that the problem is the hard drive. I'm using an old 250 GB model that used to be used for backup purposes; however, I loaded Windows 7 Ultimate onto it. This hard drive has trouble getting up to full RPM and therefore cannot boot correctly until it's warmed up, meaning that my PC needs to be restarted several times before it boots (once it took me 13 reboots to get my PC on!). From my research it's either that or a lack of power supply, and I've tried multiple PSUs. Question: I have my OS and all my files on this 250 GB HDD... If I were to literally open Explorer and copy EVERYTHING (including hidden files, obviously) from this 250 GB drive to a spare 500 GB drive I've got knocking about, will it boot? I cannot be bothered to load another OS onto my PC, so if there is a way I can just copy the existing one over from one HDD to another and have it boot normally, that would be epic! I've heard about HDD cloning software, but before I purchase and/or download such software, I need to know if I can just copy the OS over through Windows Explorer.
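
    For what it's worth, a plain Explorer copy generally won't boot: the MBR and boot sector don't come across as files, and in-use system files (registry hives, pagefile, and so on) can't be copied from a running system. A sector-level clone carries all of that over. A sketch using dd from any Linux live CD (device letters are examples - confirm them with fdisk -l first, since swapping if= and of= destroys the good disk):

        # clone the whole 250 GB disk, MBR and all, onto the 500 GB disk
        dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

    The result is a 250 GB layout on the 500 GB drive; the leftover space can be claimed afterwards by extending the partition. Dedicated cloning tools (Clonezilla and the like) do essentially this with better error handling for a flaky source drive.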


  • Why is XSP/Mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using Mono 2.6.7 and XSP 2.8 (exact version unsure), and I'll try downgrading XSP and whatever else. Today I noticed my server randomly returns HTTP Error 500, Internal Server Error. I looked in my log files (http://www.pastie.org/1426236) and it appears that the problem is Lucene's locking, which it does by creating a file called write.lock. This used to work with no problem, but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site will not work every time I restart Apache, and I need to delete the file by hand (at least it doesn't throw random 500 errors). However, now it just says Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock no matter what. I even chmod -R 777 the directory, and still no luck; write.lock doesn't even exist. I have no clue what's going on. Any ideas?
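
    A hedged workaround for stale locks, assuming a Lucene.Net 2.x build where IndexWriter exposes the static IsLocked/Unlock helpers: clear any leftover write.lock once at application startup, before any writer is opened.

        // sketch: run once at startup, before creating any IndexWriter
        var dir = Lucene.Net.Store.FSDirectory.Open(
            new System.IO.DirectoryInfo("/var/www/SITE_2/App_Data/LuceneIndex_a"));
        if (Lucene.Net.Index.IndexWriter.IsLocked(dir))
        {
            Lucene.Net.Index.IndexWriter.Unlock(dir);  // removes the stale write.lock
        }

    The usual root cause is two processes (or one crashed worker) holding the same index open for writing; with two sites on one box, make sure each has its own index directory and a single long-lived writer.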


  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect the language won't matter to sysadmins): gethostbyaddr($_SERVER['REMOTE_ADDR']). I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script? -- Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but that's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later presents it to the user. So the HTTP referrer comes up empty, since callersite.com makes a direct server-to-server call to me. Any hope?
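
    The reason for the odd name: gethostbyaddr() returns the PTR (reverse DNS) record for the calling IP, and that record usually names the hosting provider's network (The Planet was a large datacenter company) rather than the website itself. One workable pattern for an API instead - have callers declare their domain in the request, then forward-confirm the claim against the connecting IP. A sketch (the domain parameter is hypothetical):

        <?php
        // caller sends e.g. ?domain=callersite.com along with its request
        $claimed = isset($_GET['domain']) ? $_GET['domain'] : '';
        $ips = gethostbynamel($claimed);   // forward-resolve the claimed domain
        if ($ips !== false && in_array($_SERVER['REMOTE_ADDR'], $ips)) {
            $caller = $claimed;            // the connecting IP really serves that domain
        } else {
            $caller = false;               // claim could not be verified
        }

    This breaks when callers sit behind shared outbound IPs or CDNs, so for anything serious, an issued API key per caller is the more robust identification.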


  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers running Windows XP Pro, and I want to synchronize/back up a folder from one machine to another. So far it's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; such backup programs treat the hard links as multiple files and copy them as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and work in a server/client mode? Does anyone have an idea how to do this?
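
    The server/client guess is on the right track: if an rsync daemon runs on the destination machine, the link structure is rebuilt there instead of being flattened by the share. A sketch assuming cwRsync (an rsync port for Windows) on both XP machines; the module name, paths, and host name are hypothetical:

        # rsyncd.conf on the destination machine:
        #   [backup]
        #   path = /cygdrive/d/backup
        #   read only = false

        # on the source machine: -a archive mode, -H preserve hard links
        rsync -aH --delete /cygdrive/c/data/ rsync://destpc/backup/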


  • TCP/UDP hole punching from and to the same NAT network

    - by Luc
    I was wondering if TCP/UDP hole punching would still work when both peers are in the same network (behind the same NAT), and what the packet's path would be. When hole punching within one network, a peer sends out a packet whose source and destination address are the same external address; only the source and destination ports differ. I imagine a router with NAT loopback (hairpinning) enabled will handle this as it should, but what about other routers? Would they drop the packet, or would a router (the first?) at the ISP bounce the packet back, after which it gets handled okay? I'm wondering because I'm thinking about using this technique to circumvent a block between peers in a network (like a school network where clients can only access the internet, and any contact with each other is blocked). The only other option is to use a man-in-the-middle server as a proxy (tunnel?). The disadvantage of that is that you have to have a server with significantly more bandwidth than one that only does hole punching, and the latency would increase significantly.
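
    Whether a given router hairpins is easy to test empirically. A sketch (Python; the addresses are placeholders): peer A listens on a port it has already punched open via an outside server, and peer B, on the same LAN, sends a datagram to the NAT's external address. Delivery means the router loops the packet back; silence means it drops it.

        # peer A (listener), inside the NAT -- run first
        import socket
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", 40000))      # the previously punched port
        print(s.recvfrom(1500))         # blocks until a datagram arrives

        # peer B (sender), same LAN -- run as a separate script
        import socket
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.sendto(b"hairpin test", ("203.0.113.7", 40000))   # NAT's *external* IP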


  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e., both the dev and test servers point to the dev receive location URI). When this occurs, the two environments take turns processing the received files (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not check each individually to find the culprit. I'm looking for a quick way to detect which locations are connected to the share once I notice my test files vanishing. If I can determine which connections are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around; if the connections appear correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?
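
    Yes - on the machine hosting the share, the built-in tools list both sessions and open files (run from an elevated prompt; the server name is a placeholder, and /s can be dropped when running on the file server itself):

        rem computers/users with an active session against this server's shares
        net session

        rem files currently open over the network, and by whom
        openfiles /query /s FILESERVER01 /fo table

    The same information is available graphically via Computer Management -> System Tools -> Shared Folders -> Sessions / Open Files.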


  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd to 0777 - but I am not comfortable with it being this way. So on the advice of a Server Fault specialist, I had my hosting provider install ACL support on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the first (owner) permission digit is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP runs as user "nobody". So when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last (other) digit controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last digit were a 4 (r--), the server would let the browser read the file; if it were a 6 (rw-), the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or has no PHP embedded - what is the user then? And how can I use ACLs to avoid setting the last digit to 6 (rw-) or even 7 (rwx)? [I'm not sure what execute does or means.] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
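
    This is the textbook use case for ACLs (a sketch; the web-server user "nobody" comes from the question, the paths are examples): grant the PHP user its own entry instead of widening "other", and add a default entry so newly uploaded files inherit the grant.

        # let user "nobody" write into the upload dir without touching "other"
        setfacl -m u:nobody:rwx /home/abc/public_html/uploads
        # default ACL: files created inside inherit the same grant
        setfacl -d -m u:nobody:rwx /home/abc/public_html/uploads
        # the classic mode can then stay tight: owner rwx, group r-x, other none
        chmod 0750 /home/abc/public_html/uploads
        # inspect the result
        getfacl /home/abc/public_html/uploads

    (As for execute: on a regular file it means "may be run as a program"; on a directory it means "may be entered/traversed", which is why directories generally need it.)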


  • Possible for a Linux bridge to intercept traffic?

    - by A G
    I have a Linux machine set up as a bridge between a client and a server:

        brctl addbr br0
        brctl addif br0 eth1
        brctl addif br0 eth2
        ifconfig eth1 0.0.0.0
        ifconfig eth2 0.0.0.0
        ip link set br0 up

    I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research, and it looks like it can be done using ebtables and iptables. Here is the rest of my setup:

        # pass this traffic up to IP for processing; DROP on the broute table does that
        ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
        # forward the traffic to my app listening on port 8080
        iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
        iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1
        # once the flows are marked, deliver them locally via the loopback interface
        ip rule add fwmark 1/1 table 1
        ip route add local 0.0.0.0/0 dev lo table 1
        # enable IP packet forwarding
        echo 1 > /proc/sys/net/ipv4/ip_forward

    However, nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the traffic up to be processed by iptables. Secondly, are there any other alternatives I should investigate? Edit: iptables sees the packets in nat PREROUTING, but they seem to be dropped after that; the INPUT chain (in either mangle or filter) doesn't see them.
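
    Two candidates for the missing piece, offered as hedged guesses rather than a confirmed diagnosis. First, bridged frames only traverse the iptables hooks when bridge-netfilter is enabled:

        # make bridged IPv4 traffic visible to iptables
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

    Second, TPROXY only delivers to a socket opened with the IP_TRANSPARENT option, so the application on 8080 must set it (setsockopt(fd, SOL_IP, IP_TRANSPARENT, ...)) or the redirected packets go nowhere.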


  • Revamping an old and unstable office IT solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP everywhere, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices sluggish behavior or has problems with Flash games. To say the least, this isn't working for them. Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 on the new server and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the future AD domain controller also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged into the Windows domain, but rather into their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules in their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
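
    For the Debian/OpenVPN leg, a minimal routed server.conf sketch for the setup described (the subnets and the AD/DNS server address are placeholders, and the usual certificate/PKI setup is assumed):

        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0           # address pool for VPN clients
        push "route 192.168.1.0 255.255.255.0"  # route to the office LAN
        push "dhcp-option DNS 192.168.1.10"     # the DC/DNS box, so \\server names resolve
        keepalive 10 120
        persist-key
        persist-tun

    Since the home machines aren't domain-joined, access to shares over the tunnel falls back to an NTLM prompt: users can authenticate per connection as DOMAIN\username when mapping the share, which still lets the NTFS permissions follow the AD user.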

