Search Results

Search found 6312 results on 253 pages for 'sad admin'.

Page 198/253

  • How to give a user NTFS rights to a folder, via Powershell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want there to be a static user each time: I want to be able to run this script, have it ask me for a username, and have it then go out, create the folder, and give that same user full rights to that folder based on the username I've supplied. I can use Smithd as a user, like this:

        New-Item \\fileserver\home$\Smithd –Type Directory

    But I can't get it to reference the user like this:

        New-Item \\fileserver\home$\$username –Type Directory

    Here's what I have (creating a new folder and setting NTFS permissions):

        $username = read-host -prompt "Enter User Name"
        New-Item \\fileserver\home$\$username –Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl

    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.

    Read the article

  • Apache pointing to the wrong version of Python on Ubuntu: how do I change it?

    - by one
    I am setting up a Flask application on an Ubuntu 12.04.3 LTS EC2 instance and everything seemed to be working well (i.e. I could get to the webpage via the publicly available URL) until I tried to import a module (e.g. numpy) and realised the Apache Python differs from the one I used to compile mod_wsgi, and also from the one I am using. I am running apache2. The apache2 logs show the warnings (specifically the last line shows the path hasn't changed):

        [warn] mod_wsgi: Compiled for Python/2.7.5.
        [warn] mod_wsgi: Runtime using Python/2.7.3.
        [warn] mod_wsgi: Python module path '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib$

    I have tried to set the path in my virtual host conf (my Python is located in /home/ubuntu/anaconda/bin along with all of the other libraries):

        WSGIPythonHome /home/ubuntu/anaconda
        WSGIPythonPath /home/ubuntu/anaconda

        <VirtualHost *:80>
            ServerName xx-xx-xxx-xxx-xxx.compute-1.amazonaws.com
            ServerAdmin [email protected]

            WSGIScriptAlias / /var/www/microblog/microblog.wsgi
            <Directory /var/www/microblog/app/>
                Order allow,deny
                Allow from all
            </Directory>

            Alias /static /var/www/microblog/app/static
            <Directory /var/www/FlaskApp/FlaskApp/static/>
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    But I still get the warnings and the Apache Python path hasn't changed. Where do I need to put the relevant directives to point Apache at my Python version and modules (e.g. scipy, numpy etc.)? Separately, could I have avoided this using virtual environments? Thanks in advance.
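
    One way to see exactly which interpreter environment mod_wsgi is embedding is to serve a throwaway diagnostic app. The sketch below is not from the original post; it assumes you temporarily point WSGIScriptAlias at it and then browse to the site:

        # diagnostic.wsgi - a throwaway sketch; point WSGIScriptAlias at it
        # temporarily to see which Python environment mod_wsgi actually uses.
        import sys

        def application(environ, start_response):
            lines = [
                "prefix:  %s" % sys.prefix,   # expect /home/ubuntu/anaconda if WSGIPythonHome took effect
                "version: %s" % sys.version,
                "path:",
            ] + sys.path
            body = "\n".join(lines).encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]

    If the prefix still points at /usr, the directives are not being honoured. Note also that the "Runtime using Python/2.7.3" warning reflects the libpython that mod_wsgi was linked against, so rebuilding mod_wsgi against the same Python as Anaconda (or serving from a virtualenv created with it) is usually what makes the versions line up.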

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    - All NFS-based VMs remain up and working perfectly through the failover and after.
    - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only (RO).

    To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool). If I don't boot on a different server I get this error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")

    The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in #4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.

    Read the article

  • Windows roaming profile when creating a new user profile

    - by molecule
    When a particular user is having a lot of problems with Windows XP (e.g. applications crashing, applications which used to work becoming unresponsive), and as a general troubleshooting practice for a domain user, I normally rename that user's old profile and get him/her to log on to create a "fresh" profile (on the same PC). More often than not this will solve the problem, albeit with some reconfiguration (Outlook, Excel add-ins, etc.). As I took over the systems admin role from another administrator, I would like to know the easiest way to find out (either through a third party or some Windows administrative tool) what settings are carried over if the profile is a roaming profile. I tested creating a new user profile for one of my users, and it seems basic Outlook settings such as the user's mailbox and PSTs are carried over automatically when I create a new user profile. I suspect this is done through a batch file loaded as part of the login script. However, my knowledge of scripting is limited and I don't want any corruption to be carried over to the new profile. Can someone share their experiences on this? Thanks in advance.

    Read the article

  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the subject line back from my server. Here's what I've done.

    1) I do a test connect using openssl to see what the acceptable client certificate CA names are for my server. I issue this command from my client machine to my server:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit

    and part of what I get back is as follows:

        Acceptable client certificate CA names
        /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]

    2) Here is what is in the Apache configuration file on the server regarding SSL client authentication:

        SSLCACertificatePath /etc/apache2/certs
        SSLVerifyClient require
        SSLVerifyDepth 10

    3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command:

        openssl x509 -in client.pem -noout -issuer -subject -serial

    here is what is returned:

        issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]
        subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/[email protected]
        serial=0E

    (Please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/.)

    4) I put client.pem on the client machine, and on the client machine I run the following command:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem

    and I get this error:

        CONNECTED(00000003)
        OCSP response: no response sent
        depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48
        574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182:

    I'm really stumped as to what I've done wrong. I've searched quite a bit on this error, and what I found is that people are saying the issuing CA of the client's certificate is not trusted by the server; yet when I look at the issuer of my client certificate, it matches one of the accepted CAs returned by my server. Can anyone help, please? Thank you in advance.
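
    For what it's worth, the same client-side handshake can be reproduced outside of openssl s_client, which sometimes makes the failure easier to script and log. This is only a rough sketch in Python (assuming Python 3.6+ and that client.pem contains both the certificate and its private key; pass keyfile= to load_cert_chain otherwise):

        # Rough Python equivalent of `openssl s_client -cert client.pem` for this
        # server; a sketch, not code from the original question.
        import socket
        import ssl

        HOST, PORT = "myupload.mysite.net", 443

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False          # the test chain is self-signed
        ctx.verify_mode = ssl.CERT_NONE     # only the client-auth side matters for this test
        ctx.load_cert_chain("client.pem")   # certificate presented when the server asks for client auth

        with socket.create_connection((HOST, PORT)) as sock:
            try:
                with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                    print("handshake OK:", tls.version())
            except ssl.SSLError as exc:
                # "tlsv1 alert unknown ca" here means the server rejected the client
                # certificate because it could not build a chain to a CA it trusts.
                print("handshake failed:", exc)

    One server-side thing worth checking: SSLCACertificatePath is only consulted via hash-named symlinks (created with c_rehash), so mod_ssl can miss a CA file that sits in the directory under its plain name, and that produces exactly this "unknown ca" alert even though the file is present.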

    Read the article

  • Windows Scheduled Startup Task doesn't appear to be fully working but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script to run at startup. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks. And here I am.

    I have tried two different versions of my script (for syntax purposes). I originally thought my syntax could be bad, so I tried a few approaches. Neither works.

    My #1 .CMD file approach commands look similar to this:

        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"

    My #2 .CMD file approach commands look similar to this (it invokes a shortcut file):

        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^

    Both of these scripts work fine if I manually run them, either by running the .CMD file or by manually forcing the Scheduled Tasks console to "Run" the task. The manual process seems to work fine, but automated it does not. My scheduled task is set to run at startup and uses "highest privileges" to execute as admin. At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run. That command looks like this:

        echo foo > C:\foo.txt

    When I reboot my server and Scheduled Tasks kicks in, I never get my ten .EXE files to run, but I do get C:\foo.txt on my drive. What gives?
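
    Since the echo line proves the .CMD file itself runs, the difference is likely in the context the task gets at boot. The sketch below is not from the original post; it is a small Python diagnostic (assuming Python is installed on the server, and C:\task_context.txt is a hypothetical log path) that could be called from the same scheduled task to capture that context:

        # task_context.py - hypothetical diagnostic, run from the scheduled task
        # itself. Captures the user, working directory and PATH the .CMD file gets
        # at boot; relative arguments like "..\..\published\wc_task.wfc" depend on
        # the working directory being what you expect.
        import getpass
        import os

        with open(r"C:\task_context.txt", "w") as log:
            log.write("user: %s\n" % getpass.getuser())
            log.write("cwd:  %s\n" % os.getcwd())
            log.write("PATH: %s\n" % os.environ.get("PATH", ""))

    The other usual suspect is the session: a task triggered at startup runs non-interactively with no desktop, so GUI programs launched with start can die quietly or run invisibly even though the batch file itself completes.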

    Read the article

  • How to bypass firewall to connect to a proxy server?

    - by Bruce
    I am conducting a small experiment on my office network. I have set up a proxy server on my desktop machine (connected to my LAN) and I have volunteers access the internet via my proxy server. Everything is working well. The problem is that people cannot connect to the proxy server from their laptops. I asked my network admin and he said the wireless network has a firewall which prevents users from connecting to my proxy. He said I could tunnel the traffic or use SSH, though. I am afraid I do not fully understand what is going on. Is there a way by which users connected to the wireless network can connect to my desktop? I am using FreeProxy on Windows as my proxy server: http://www.handcraftedsoftware.org/index.php?page=download FreeProxy allows me to create a SOCKS 4/4a/5 proxy. Is that what I need? Part of the experiment involves logging the URL requests of the users (I am doing a measurement study), so any solution must allow me to log the URL requests of users. Also, what changes do I need to make to the browser configuration?
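
    To confirm what the admin described, here is a small reachability check (a sketch, not part of the original setup; the address and port are placeholders for the real FreeProxy listener) that a volunteer could run from a laptop on the wireless network:

        # Quick TCP reachability test from a wireless laptop to the desktop proxy.
        # PROXY_HOST and PROXY_PORT are hypothetical; substitute the real values.
        import socket

        PROXY_HOST = "192.168.1.50"
        PROXY_PORT = 8080

        try:
            with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=5):
                print("TCP connect OK - the port is not being blocked")
        except OSError as exc:
            print("TCP connect failed (%s) - consistent with a firewall between the wireless LAN and the desktop" % exc)

    If the port is filtered, the SSH suggestion normally means tunnelling: the laptop forwards a local port to the proxy over SSH (for example ssh -L 8080:desktop-address:8080 user@some-reachable-host) and the browser is then pointed at localhost:8080, which still lets FreeProxy log the URL requests.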

    Read the article

  • Apache2, making my site publicly available

    - by Shackler
    Hello, I want to make my Apache 2 development server public to the internet; it is a Django-based website. Here is my apache2 config:

        <VirtualHost *:80>
            Alias /media /home/user/myproject/statics
            Alias /admin_media /home/myuser/django/Django-1.1.1/django/contrib/admin/media
            WSGIScriptAlias / /home/myuser/myproject/myproject_wsgi.py
            WSGIDaemonProcess myproject user=myuser group=myuser threads=25
            WSGIProcessGroup myproject
        </VirtualHost>

    When I do netstat -lntup I get:

        Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
        tcp        0      0 127.0.0.1:3306     0.0.0.0:*          LISTEN   -
        tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   -
        tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   -
        tcp6       0      0 :::80              :::*               LISTEN   -
        tcp6       0      0 :::22              :::*               LISTEN   -
        tcp6       0      0 ::1:631            :::*               LISTEN   -
        udp        0      0 0.0.0.0:5353       0.0.0.0:*                   -
        udp        0      0 0.0.0.0:38582      0.0.0.0:*

    I connect with ADSL, thus I am behind a router, so I have put my machine in the router's DMZ. What can be the problem? When I try to log in with my IP, I get my router's config page; when a friend tries to connect to me from the internet, he gets "not authorized".

    Read the article

  • How to correctly mount a FAT32 partition in Ubuntu in order to preserve case

    - by Dean
    I've found a couple of problems that might be related to how my FAT32 partition was mounted. I hope you can help me solve them. I've also included the commands I used, to help others who find this post; sorry to those who might feel I should use less space. I have the following file structures on my disk:

        dean@notebook:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x08860886

           Device Boot   Start     End      Blocks   Id  System
        /dev/sda1   *        1      13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2           13    5737    45978624    7  HPFS/NTFS
        /dev/sda3         5738   10600    39062047+  83  Linux
        /dev/sda4        10601   19457    71143852+   5  Extended
        /dev/sda5        10601   11208     4883728+  82  Linux swap / Solaris
        /dev/sda6        11209   15033    30720000    b  W95 FAT32
        /dev/sda7        15033   19457    35537920    7  HPFS/NTFS

    In /etc/fstab I've got:

        UUID=91c57a65-dc53-476b-b219-28dac3682d31  /             ext4     defaults                                        0 1
        UUID=BEA2A8AFA2A86D99                      /media/NTFS   ntfs-3g  quiet,defaults,locale=en_US.utf8,umask=0        0 0
        UUID=0C0C-9BB3                             /media/FAT32  vfat     user,auto,utf8,fmask=0111,dmask=0000,uid=1000   0 0
        /dev/sda5                                  swap          swap     sw                                              0 0
        /dev/sda1                                  /media/sda1   ntfs     nls=iso8859-1,ro,noauto,umask=000               0 0
        /dev/sda2                                  /media/sda2   ntfs     nls=iso8859-1,ro,noauto,umask=000               0 0

    I checked my id using id and I've got:

        dean@notebook:~$ id
        uid=1000(dean) gid=1000(dean) groups=4(adm),20(dialout),24(cdrom),46(plugdev),103(fuse),104(lpadmin),115(admin),120(sambashare),1000(dean)

    I don't know why, with these settings, I still have a problem using svn like in this one. Thank you for your help!

    Read the article

  • MySQL startup, shutdown and logging on OS X

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I have OS X 10.5.8. I don't remember how/if I installed MySQL. Here is what I know: it automatically starts on boot, and the process looks like this:

        /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
        _mysql    96   0.0  0.0    75884    684   ??  Ss   Sat06PM   0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

    When I run:

        /usr/local/mysql/libexec/mysqld --verbose --help

    it says:

        /usr/local/mysql/libexec/mysqld  Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

    It seems to use my.cnf from /etc/my.cnf. Now here are my questions. I don't see anything in the startup items that remotely looks like MySQL:

        ls /Library/StartupItems/
        BRESINKx86Monitoring  ChmodBPF  HP IO  HP Trap Monitor  Parallels  ParallelsTransporter

    1.) So how does it start up automatically?
    2.) How do I start and stop this type of installation?

    Also, looking at the config, the logs have no values:

        /usr/local/mysql/libexec/mysqld --verbose --help|grep '^log'
        log                                  (No default value)
        log-bin                              (No default value)
        log-bin-index                        (No default value)
        log-bin-trust-function-creators      FALSE
        log-bin-trust-routine-creators       FALSE
        log-error
        log-isam                             myisam.log
        log-queries-not-using-indexes        FALSE
        log-short-format                     FALSE
        log-slave-updates                    FALSE
        log-slow-admin-statements            FALSE
        log-slow-queries                     (No default value)
        log-tc                               tc.log
        log-tc-size                          24576
        log-update                           (No default value)
        log-warnings                         1

    3.) Does that mean there is no logging enabled in my setup?

    Thanks in advance! Joel

    Read the article

  • DNS configuration issues. Clients inside network unable to resolve DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get:

        C:\Users\chad>nslookup
        Default Server:  UnKnown
        Address:  192.168.1.2

    where I would expect the FQDN in the spot where UnKnown is seen. The DNS server knows itself pretty well, but I think only because I have an entry in the /etc/hosts file to resolve it. There are so many places to look I don't even know where to begin. Are there any logs I can look at? Something. Places I've looked at and configured:

        /etc/bind/zones/domain.com.db
        /etc/bind/zones/rev.1.168.192.in-addr.arpa
        /etc/bind/named.conf.local

    EDIT: /etc/bind/zones/rev.1.168.192.in-addr.arpa

        @ IN SOA dns-serv1.mydomain.com [email protected]. (
                2006081401;
                28800;
                604800;
                604800;
                86400 )
             IN NS  dns-serv1.mydomain.com.
        2    IN PTR dns-serv1
        2    IN PTR mydomain.com

    EDIT 2: /etc/bind/named.conf.local

        zone "mydomain.com" {
            type master;
            file "/etc/bind/zones/mydomain.com.db";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
        };
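
    The "UnKnown" in nslookup just means the client's reverse (PTR) lookup of 192.168.1.2 failed, so the reverse zone is the place to look. A quick way to test it from any client with Python installed (a sketch, not from the original post):

        # Reverse-lookup check: nslookup prints "UnKnown" when the PTR query for
        # the DNS server's own address gets no usable answer.
        import socket

        DNS_SERVER_IP = "192.168.1.2"

        try:
            name, aliases, addresses = socket.gethostbyaddr(DNS_SERVER_IP)
            print("PTR resolves to:", name)      # expect something like dns-serv1.mydomain.com
        except socket.herror as exc:
            print("reverse lookup failed:", exc)

    Two details in the config above are worth a second look: PTR record targets need a trailing dot (2 IN PTR dns-serv1.mydomain.com.), otherwise the zone's origin gets appended to the name; and named.conf.local points at rev.0.168.192.in-addr.arpa while the file listed earlier is rev.1.168.192.in-addr.arpa.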

    Read the article

  • Sharing large (multi-Gb) files with clients

    - by Tim Long
    I wasn't sure if this was the best place for this question, but I think it is squarely in the realm of the IT admin, so that's the reason I put it here. We need to share large files (several gigabytes) with external clients. We need a simple way of reliably and automatically publishing these files so that clients can then download them. Our organization has Windows desktops and a Windows SBS 2011 server.

    Sharing from our server is probably suboptimal from the client's perspective because of the low upstream bandwidth of typical ADSL (around 1 Mbps) - it would take all day (9 hours for a 4Gb file) for the client to download the file. Uploading to a 3rd-party server is good for the client but painful for us, because we then have to deal with a multi-hour upload. Uploading to a third-party server would be less problematic if it could be made reliable and automatic, e.g. something like a Groove/SharePoint Workspace - simply drop the file in and wait for it to synchronize - but Groove has a 2Gb limit, which is not big enough.

    So ideally I'd like a service with the following attributes:

    - Must work for files of at least 5Gb, preferably 10Gb
    - Once the transfer is started, it must be reliable (i.e. not sensitive to disconnections and service outages) and completely automatic
    - Ideally, the sender would get a notification when the transfer completes
    - Has to work with Windows-based systems

    Any suggestions?

    Read the article

  • Occasional "Could not load file or assembly" in asmx WebService on IIS and DFS

    - by jesse-mcdowell
    We have a handful of ASMX web services hosted on two identical Windows Server 2003 boxes. The virtual directory for the web services is loaded from a DFS share; both servers point to the same share. We have a load balancer between the internet and the two web servers. At a seemingly random interval (right now about twice per week), when a user tries to access a method on the web service, IIS returns the error "Could not load file or assembly" for one of the assemblies used in the method call, and will continue reporting it each time the method is called until the app pool is recycled. We haven't found any distinguishable pattern to the problem. This is what I know:

    - the missing assembly varies (but it's always a home-brew assembly)
    - the web service method that fails varies
    - there is no noticeable pattern to the times or intervals at which the problem appears
    - there are no admin users accessing the servers when the problem appears
    - the failing method will work correctly on one server and fail on the other, even though both point to the same bin folder
    - the problem can always be corrected by recycling the app pool and making no other changes

    I have enabled the Assembly Binder Log, and know that the binder is looking in the correct location for the file. Our assemblies are compiled for .NET 3.5.

    Read the article

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never got this working on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N  The database is currently in use.  SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I  The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N  The database is currently in use.  SQLSTATE=57019
        db2 =>

    Anyone got any clues? I'm running the cmd windows as the local administrator (Windows 2008), and this is also the admin for DB2. The connection-pool user cannot connect during the quiesce state.

    Read the article

  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for a BizTalk 2009 database. Following the http://msdn.microsoft.com/en-us/library/aa560961.aspx article, I am doing the following to set up BizTalk log shipping on the destination server.

    Enable ad-hoc queries with:

        sp_configure 'show advanced options',1
        go
        reconfigure
        go
        sp_configure 'Ad Hoc Distributed Queries',1
        go
        reconfigure
        go
        sp_configure 'show advanced options',0
        go
        reconfigure
        go

    Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server, then run:

        exec bts_ConfigureBizTalkLogShipping
            @nvcDescription = '',
            @nvcMgmtDatabaseName = '',
            @nvcMgmtServerName = '',
            @SourceServerName = null, -- null indicates that this destination server restores all databases
            @fLinkServers = 1         -- 1 automatically links the server to the management database

    When I run this I am receiving the following error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So, I asked the domain admin to create SPNs for the servers and the SQL service accounts, for both source and destination, using the name and the FQDN, and to enable the computer names and service accounts for delegation. When I run the following:

        select * from sys.dm_exec_connections

    I get the same error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    Any help please?

    Read the article

  • win8: access denied to external USB disk; update access rights fails

    - by Gerard
    I used to work with two laptops (Vista and Win7), my work being files on an external USB disk. My older laptop broke down, so I bought a new one. I had no option other than to take Win8.

    1/ I suspect something changed with access rights, as my external disk suffered an "access denied" problem on Win8. I was prompted (by Win8) somehow to fix the access rights, which I tried to do, going to Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, the USB disk somehow was not recognized anymore.

    2/ Back on Win7, I was warned that my disk needed to be verified, which I did. In this process some files were lost (most of them I could recover from the found00x folder, but I have a backup anyway). Also, I don't know why, but under Win7 all the folders showed with a lock.

    3/ Then back again to Win8. Same problem: access denied to my disk, plus no way to change access rights as it gets stuck at "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Win8 and Win7. I did 2/ and 3/ a few times. At some point I also got an access denied in Win7. I could restore access rights on the disk for "system" (Properties - Security - Edit, full control for the "system" group ...). But then I still get the same access-rights problem on Win8, and get stuck in the process of restoring full control to the "system" and "admin" groups.

    Now, after trying for more than 3 days, I am losing my patience with that bloody Win8 which I did not want to buy but had no choice. I upgraded Win8 with the Windows updates available. It does not help. Can anybody help me?

    Read the article

  • Joining two routers together, but I have no access to the second router, although I know its IP address and gateway

    - by JohnnyVegas
    I have temporarily moved into a rented apartment for 4 months, which has wireless. The trouble I am having is that the access points here are wifi only, with no RJ45, and I need RJ45 to connect some equipment that I am working with. I have purchased an RT-N66U and installed Tomato (Shibby, ver. 1.28) and successfully replaced the existing access point, but now I want to re-enable the access point that I replaced, as it links wirelessly to 3 others. Can I plug a cable from the access point into my RT-N66U and get it to access the internet via my router? I have no admin access to the existing wireless access point, and I don't want to reset it as it's not mine. There is another router situated in the roof somewhere, which I also have no access to, but it's supplying my RT-N66U with internet, and I most definitely have a double NAT; although that isn't the best way of doing things, I am limited in what I can do.

    Any suggestions on routing tables, VLANs etc. would be helpful, but I have no experience in these fields - though I know the Tomato firmware can cater for this. My router is set to IP 10.0.1.1 and DHCP is 10.0.1.100-200. The wireless access point's address was 192.168.1.2, but this was assigned by the router in the roof, which has the address 192.168.1.1. There is a cable from this router going to a wall socket, which I now have my RT-N66U attached to via the WAN port.

    I understand it's scruffy and it isn't the way to do things, but I have tried to ask for the admin details and, as the wireless network is looked after by a third party and nobody knows their details, I am stuck with this dilemma. I could buy three wireless access points and replace the existing ones, but this isn't what I want to do, and although I have installed plenty of DD-WRT wireless repeater bridges, they simply don't work here for some unknown reason. The phone line here is very noisy too, I don't have the right to install ADSL in a building that isn't mine, and 3G coverage isn't good enough either. Thanks for your time.

    Read the article

  • Troubleshooting wireless connection problem / site survey?

    - by johnnyb10
    I just started in the IT department of a small company (200 users), and it's clear that one of the main problems driving everyone crazy is the spotty nature of the wireless connectivity throughout the office, particularly in certain conference rooms. This is a huge problem because the connection often drops during important presentations to clients. I was hired to help ease the load on the existing IT admin, who has done a great job but is overloaded with many other tasks to deal with. So I would like to try to help out with this wireless issue. I am looking for advice on the best way to solve this problem - a realistic troubleshooting methodology that does not require me to spend any money. So far I've experimented with Ekahau HeatMapper, which is free and helps create a site survey. But I'm not exactly sure what I'm looking for, or whether there are other programs/tools/methods I should try as well. Any advice would be greatly appreciated.

    [Some background: the wireless setup consists of an HP ProCurve Mobility MSM (710?) controller that controls 10 access points throughout the building. There are three virtual wireless networks configured on the controller: one seems to be a default that cannot be changed, one is for internal employees and authenticates via Active Directory, and the third is a guest network for visitors. When I use HeatMapper, these show up as three different SSIDs, with different MAC addresses, all on the same channel. At first I thought maybe this would cause interference, but this seems to be the way the controller works; apparently, it automatically configures the channels to avoid interference from the other APs on the network.]

    Read the article


  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine to access. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk, obviously. I've tried to set all of the permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it, in order to give mt-daapd permission to simply read all of the files in that directory and its subdirectories. If I run id on each of my users, here's what's displayed:

        $ id rfkrocktk
        > uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users)

        $ id mt-daapd
        > uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users)

    It definitely seems that both users are part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output:

        drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music

    If I run ls -l inside the Music directory to get its children, here's the output:

        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied
        drwxr-Sr--  10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls
        drwxr-Sr--   4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth
        drwxr-Sr--   5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin
        ...

    From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that, for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
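
    A detail worth noting in the listings above: the group permission bits read r-S, and a capital S means the setgid bit is set while group execute is not; without the execute bit the media-users group cannot enter the directories at all. The following is just a sketch (not from the original post) that walks the tree and flags any directory missing that bit:

        # Flag directories the group cannot traverse (missing group execute).
        import os
        import stat

        MUSIC_DIR = "/home/rfkrocktk/Music"

        for root, dirs, files in os.walk(MUSIC_DIR):
            mode = os.stat(root).st_mode
            if not mode & stat.S_IXGRP:                          # group execute bit missing
                print("group cannot enter:", root, oct(stat.S_IMODE(mode)))

    If every directory is flagged, something like chmod -R g+X on the tree (plus group read on the files) is the usual remedy.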

    Read the article

  • Changing MX records in named zone file

    - by Paul England
    I forgot how all this works. I have a GoDaddy account, using my own DNS and whatnot. I'm having trouble getting my email to work; they said I need to update my MX records. Basically, I have the following (184.168.30.42 is the domain's IP address, obviously):

        gamengai.com.   14400   IN   NS   n1
        gamengai.com.   14400   IN   NS   n2
        ns1             14400   IN   A    184.168.30.42
        ns2             14400   IN   A    184.168.30.42
        gamengai.com.   14400   IN   A    184.168.30.42
        localhost       14400   IN   A    127.0.0.1
        ftp             14400   IN   A    184.168.30.42
        www             14400   IN   A    184.168.30.42
        mail            14400   IN   A    184.168.30.42
        subdomain       14400   IN   A    184.168.30.42
        gamengai.com    14400   IN   MX   10 mail

    Mail doesn't work, though... they say to make the following change:

        0  smtp.secureserver.net
        10 mailstore1.secureserver.net

    So should the last line point to mailstore1.secureserver.net instead of mail in the last field? What about the other line? I had this working at one time, but it's totally gotten away from me. It's a virtual dedicated server and their support for this stuff is pretty bad... almost as bad as my admin skills since I went the programmer route.
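
    As a sanity check while editing the zone, it helps to query what actually gets served for the domain. This is only a sketch and assumes the third-party dnspython package is installed (dnspython 1.x spells the call dns.resolver.query instead of resolve):

        # Query the MX records currently served for the domain.
        import dns.resolver

        for rdata in dns.resolver.resolve("gamengai.com", "MX"):
            print(rdata.preference, rdata.exchange)

        # For GoDaddy-hosted mail the expected answer would be the secureserver.net
        # exchangers, i.e. 0 smtp.secureserver.net. and 10 mailstore1.secureserver.net.

    In zone-file terms, the change they describe means replacing the single MX line with two MX records pointing at those two hosts (with trailing dots, since they are outside the zone), rather than at the local mail A record.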

    Read the article

  • Automatically install driver on headless WHSv1 system

    - by Dan Neely
    I have one of the HP MediaSmart Windows Home Server v1 boxes. Its network port appears to have died a few days ago, but the system is not giving any other sign of failure: no activity lights come on on either side of the cable when it is connected to my gigabit switch, and when connected to one of my router's 100-megabit ports the lights turn on, but it remains unreachable over the network and my router never lists it among its DHCP clients. I bought a USB-Ethernet adapter to temporarily get it back online, but the adapter needs a driver to work, which I can't install because the system is headless by design (no video out, no PCI/PCIe slots), with admin access only available via the WHS client or remote desktop. Both of those options require network connectivity and are consequently unavailable. I tried copying the drivers to a flash drive, but Windows either didn't look there or none of the drivers provided were suitable (Win8, Win7, or combined XP and Vista). I've been told that a USB WiFi adapter would have the same driver problem.

    Read the article


  • LDAP not showing secondary groups

    - by Sandy Dolphinaura
    Currently, I have an LDAP server (running ClearOS, if that makes any difference) containing a database of users. I set up LDAP on a couple of my Debian VMs using libpam-ldapd and discovered an odd problem: my group/user mapping shows up when running getent group, but the secondary groups do not show up when running id. Here is my /etc/nslcd.conf:

        # /etc/nslcd.conf
        # nslcd configuration file. See nslcd.conf(5)
        # for details.

        # The user and group nslcd should run as.
        uid nslcd
        gid nslcd

        # The location at which the LDAP server(s) should be reachable.
        uri ldaps://10.3.0.1

        # The search base that will be used for all queries.
        base dc=pnet,dc=sandyd,dc=me

        # The LDAP protocol version to use.
        #ldap_version 3

        # The DN to bind with for normal lookups.
        binddn cn=manager,ou=internal,dc=pnet,dc=sandyd,dc=me
        bindpw Me29Dakyoz8Wn2zI

        # The DN used for password modifications by root.
        #rootpwmoddn cn=admin,dc=example,dc=com

        # SSL options
        ssl on
        tls_reqcert never

        # The search scope.
        #scope sub
        #filter group (&(objectClass=group)(gidNumber=*))
        map group uniqueMember member
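
    getent group and id exercise two different lookups (group enumeration versus the initgroups-style "groups of this user" query), and nslcd can answer one while failing the other depending on how the member attribute is mapped, which is one reason this split can happen. A small comparison sketch, assuming Python 3 on one of the Debian VMs and a hypothetical account name:

        # Compare the two NSS paths: enumeration (what `getent group` walks) versus
        # the per-user group list (what `id` relies on). If the first shows the
        # membership and the second does not, the nslcd group member mapping is the
        # thing to look at.
        import grp
        import os
        import pwd

        USER = "someuser"   # placeholder for a real LDAP account

        enumerated = [g.gr_name for g in grp.getgrall() if USER in g.gr_mem]
        pw = pwd.getpwnam(USER)
        per_user = [grp.getgrgid(gid).gr_name for gid in os.getgrouplist(USER, pw.pw_gid)]

        print("member via getgrall():    ", enumerated)
        print("member via getgrouplist():", per_user)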

    Read the article

  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2GB memory and 50GB storage. It runs a single domain, with a single Magento install, and nothing else. About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped and the only way to restart the container was via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2GB memory limit, at which point all VPS processes are killed to stop it bringing the entire node down. I have not made any config changes to it at all - I was running New Relic on it for a short while, but have removed that in case it was contributing to the issues. I can see nothing in the logs which indicates an issue, and we have no cron jobs running at the time the crashes happen. The site generates steady, but not huge, amounts of traffic (usually averaging fewer than 100 visits per day). Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but I know more than enough to solve most problems... Failing that, any other ideas that might help? I can't afford for this site to be down this much.
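
    One concrete thing to look at is whether Apache is allowed to spawn more PHP-laden workers than 2GB can hold; the arithmetic is rough but useful. A sketch with hypothetical numbers (measure the real per-process RSS with ps or top before trusting it):

        # Back-of-the-envelope MaxClients sizing for a 2 GB container: if the
        # workers collectively exceed the memory limit, the container gets killed,
        # which matches the symptoms described.
        MEMORY_LIMIT_MB   = 2048   # the VPS limit
        RESERVED_MB       = 512    # hypothetical allowance for MySQL and the OS
        AVG_APACHE_RSS_MB = 60     # hypothetical per-process size; measure on the real box

        max_clients = (MEMORY_LIMIT_MB - RESERVED_MB) // AVG_APACHE_RSS_MB
        print("suggested MaxClients ceiling:", max_clients)   # 25 with these example numbers

    If the ceiling comes out far below the configured MaxClients (often 150 or more in stock prefork configs), capping it is usually the quickest way to stop the container being killed.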

    Read the article
