Search Results

Search found 22998 results on 920 pages for 'supervised users'.

Page 313/920 | < Previous Page | 309 310 311 312 313 314 315 316 317 318 319 320  | Next Page >

  • Nagios 403 forbidden, indexes?

    - by Georgi
    I installed Nagios under FreeBSD 9, but I can't get the web interface to be reachable in a browser from other PCs. I think the problem is with directory indexes, or that there is no index file (only main.php). Apache says the syntax is OK. The permissions on the directory are 777. The log prints: Directory index forbidden by Options directive: /usr/local/www/nagios/. This is my configuration:
        ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
        Alias /nagios /usr/local/www/nagios/
        <Directory /usr/local/www/nagios>
        Options +Indexes FollowSymLinks +ExecCGI
        AllowOverride Indexes AuthConfig FileInfo
        Order allow,deny
        Allow from all
        AuthName "Nagios Access"
        AuthType Basic
        AuthUSerFile /usr/local/etc/nagios/htpasswd.users
        Require valid-user
        </Directory>
        <Directory /usr/local/www/nagios/cgi-bin>
        Options +ExecCGI
        AllowOverride None
        Order allow,deny
        Allow from all
        AuthName "Nagios Access"
        AuthType Basic
        AuthUSerFile /usr/local/etc/nagios/htpasswd.users
        Require valid-user
        </Directory>
    Is the problem with the indexes, maybe? When I remove the Options line the site is public and reachable, but it lists the files and says that indexes are forbidden.
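
    A minimal sketch of one way around the 403, assuming the paths from the question: tell Apache to serve main.php as the directory index instead of relying on a listing. Since the question's config already has AllowOverride Indexes, the DirectoryIndex directive can go into a .htaccess file (the index filename is taken from the question; adjust if the Nagios web UI uses a different entry page).

        # Sketch only: serve main.php when /nagios/ is requested, no directory listing needed
        echo 'DirectoryIndex main.php index.php' > /usr/local/www/nagios/.htaccess
        # No restart is required for a .htaccess change; after a config-file change run:
        apachectl configtest && apachectl graceful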


  • Munin node not listing any plugins on new Fedora 14 installation

    - by Dave Forgac
    I have just installed munin-node from the base repo on Fedora 14 and started it. My munin server is not able to collect data from this node, so I tried connecting via telnet to test. When I connect via telnet, no plugins are listed:
        [dave@host ~]# telnet localhost 4949
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        # munin node at host.example.com
        list
        quit
        Connection closed by foreign host.
        [dave@host ~]#
    I did not modify anything after the installation. munin-node.conf allows connections from 127.0.0.1, and the default set of plugins in /etc/munin/plugins/ are symlinked to the plugins in /usr/share/munin/plugins/. Here is what the working output of the telnet 'list' test should look like (this is on a Fedora 13 host):
        [dave@www ~]$ telnet localhost 4949
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        # munin node at www.example.com
        list
        apache_accesses apache_processes apache_volume cpu df df_inode entropy forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts iostat iostat_ios irqstats load memory munin_stats mysql_ mysql_bytes mysql_innodb mysql_queries mysql_slowqueries mysql_threads netstat open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat yum
        quit
        Connection closed by foreign host.
        [dave@www ~]$
    Edited to show the output of munin-node-configure:
        [root@host ~]# munin-node-configure
        Plugin                   | Used | Extra information
        ------                   | ---- | -----------------
        acpi                     | no   |
        amavis                   | no   |
        ...
        http_loadtime            | no   |
        if_                      | yes  | eth1 eth0
        if_err_                  | yes  | eth0 eth1
        ifx_concurrent_sessions_ | no   |
        interrupts               | yes  |
        ...
        uptime                   | yes  |
        users                    | yes  |
        varnish_                 | no   |
        vserver_resources        | no   |
        yum                      | yes  |
        zimbra_                  | no   |
    Any suggestions on what to check next?
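
    A short diagnostic sketch, assuming Fedora's default paths: re-run the plugin autodetection, link whatever it suggests, restart the node, and repeat the list test (the nc line is just a quick way to re-issue the list command).

        # Re-link the plugins munin-node-configure thinks are usable, then restart the node
        munin-node-configure --shell | sh
        service munin-node restart
        # Re-test: an empty reply to "list" usually means the node saw no plugins
        # in /etc/munin/plugins/ when it started
        printf 'list\nquit\n' | nc localhost 4949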


  • pfsense single MAC is listed with several IP's in ARP table

    - by Tillebeck
    I have this problem: arp table filling up. But I am quite sure that I cannot blame Kaspersky. Scenario: a user plugs his computer in. He waits and waits but gets no IP via DHCP. Then he is told there is an IP conflict... He ends up assigning himself a static IP to access the net. In the ARP table of the router I see:
        192.168.24.144  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.145  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.181  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.150  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.151  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.152  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.156  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.157  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.159  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.160  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.130  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.132  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.164  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.137  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.140  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.206  00:16:41:42:3c:9e  Lenovo  LAN
    The last one, .206, is the static address he gave himself. Several users describe the exact same problem. It started after removing some filters in the switches, so all users are on one LAN and can see each other. Before, when the filters blocked access to each other's computers, no one reported this kind of behavior. Any ideas?
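
    A small diagnostic sketch from the pfSense shell, with the caveat that the lease-file path assumes pfSense's default chrooted DHCP daemon: check whether the DHCP server actually handed all of those addresses to that one Lenovo MAC, or whether something else is answering ARP for them.

        # Leases recorded for the suspect MAC (path is pfSense's default; adjust if needed)
        grep -B 6 '00:16:41:42:3c:9e' /var/dhcpd/var/db/dhcpd.leases
        # Current ARP entries for the same MAC
        arp -an | grep '00:16:41:42:3c:9e'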


  • Cisco ASA: Routing packets based on where the connections started from

    - by DrStalker
    We have a Cisco ASA 5505 (version 8.2(2)) with three interfaces:
        outside: IP address 11.11.11.11, this is the default route
        inside:  IP address 10.1.1.1, this is the local subnet
        newlink: IP address 22.22.22.22, this is a new internet connection
    We need to move VPN users from the 11.11.11.11 address to the 22.22.22.22 address, and we're using SSH to the ASA to test and sort out the routing. The problem is this: if we define a static route for a particular remote IP out the newlink interface, then it can SSH to 22.22.22.22 fine. If we do not define a static route, the traffic hits the ASA but the return traffic does not come back over newlink; presumably it gets sent over the outside interface, as that is the default route. We can't define a static route for each remote endpoint because there are dial-up VPN users, who obviously change IP a lot. What we need is to configure the ASA so that if a connection comes in on the newlink interface, the reply packets for that connection go out over the newlink interface, not the default route. With iptables this would be doable by marking the connection and doing mark-based routing, but what is the equivalent for a Cisco ASA?
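
    For reference, a sketch of the iptables/iproute2 approach the question alludes to (this is not an ASA configuration; the interface and gateway names are made up for illustration): connections arriving on the second uplink get a connection mark, replies inherit the mark, and marked traffic is routed through a separate table.

        # Mark new connections arriving on the second uplink, restore the mark on replies
        iptables -t mangle -A PREROUTING -i eth2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
        # Send marked traffic out via a dedicated routing table with its own default gateway
        ip rule add fwmark 2 table 102
        ip route add default via 22.22.22.1 dev eth2 table 102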


  • Setting Up VirtualHosts for a local RubyOnRails Application

    - by chris Frisina
    I want to set up a VirtualHost so that when I type the project name in the address bar, it goes to the home page of the project. The httpd-vhosts.conf files in both the XAMPP and Apache configurations contain:
        <VirtualHost project>
        ServerAdmin [email protected]
        ServerName project
        DocumentRoot /Users/path/to/project/public
        <Directory "/Users/path/to/project/public">
        Options Includes FollowSymLinks
        AllowOverride All
        Order deny,allow
        Allow from all
        </Directory>
        RewriteEngine On
        RewriteOptions inherit
        </VirtualHost>
    I have also tried pointing the path directly at the project folder rather than its public folder. The httpd.conf of XAMPP and Apache has:
        # Virtual hosts
        Include /Applications/XAMPP/etc/extra/httpd-vhosts.conf
    The /etc/hosts file:
        127.0.0.1 other1
        127.0.0.1 other2
        127.0.0.1 project
    I have tried:
        127.0.0.1 project
        127.0.0.1:3000 project
        127.0.0.1 project:3000
    I have also restarted the Rails server and the XAMPP server many times after changes. I have it working so that project:3000/ resolves, but how do I get it so that I don't have to specify the port number? Notes: all other vhosts are working well. Rails 3.2.8 (willing to change), Ruby 1.9.3, WEBrick server.
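
    One common approach, sketched under the assumption that the Rails app keeps running on WEBrick at port 3000: have Apache answer on port 80 for the "project" hostname and reverse-proxy to 127.0.0.1:3000 (requires mod_proxy and mod_proxy_http to be loaded; the vhosts path is taken from the question).

        # Append a proxying vhost for "project", then reload Apache
        printf '%s\n' \
          '<VirtualHost *:80>' \
          '    ServerName project' \
          '    ProxyPreserveHost On' \
          '    ProxyPass / http://127.0.0.1:3000/' \
          '    ProxyPassReverse / http://127.0.0.1:3000/' \
          '</VirtualHost>' >> /Applications/XAMPP/etc/extra/httpd-vhosts.conf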


  • Control Panel for MySQL and PostgreSQL server

    - by jfreak53
    I am looking for a control panel for a CentOS system that will allow me to add users who can control their own MySQL and PostgreSQL databases. I don't want to spend $25 a month for cPanel on a dedicated server to do this, and cPanel comes with everything else, like a web server and email, that these users don't need. Basically I want to be able to create users who can create their own databases and only see the ones they have created. I want to be able to control their disk space, as with most panels, and they need to be able to create their own DB users as well. Kloxo won't work because it doesn't natively support PostgreSQL. I tried straight phpMyAdmin, but it won't let users create their own DBs unless they can also see everyone else's. Virtualmin I just can't get to work at all! ha ha. I installed it, and though it works great for itself, if a user signs into Usermin they can see all DBs. If they log into phpMyAdmin (which basically means any program that connects directly to MySQL) they see all DBs. If they log into Virtualmin then yes, they only see their own, but that won't work. I can't seem to think of another way to do this. I can use Webmin and Usermin directly, but phpMyAdmin again lets the users I create either see only one DB and create none, or see all DBs. So that sounds like a permission problem in MySQL and PostgreSQL.
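
    A sketch of the underlying permission fix, independent of any panel, with placeholder names (alice, alice_db, changeme): in MySQL, give each person an account whose grants only match databases with their own prefix, which is enough for phpMyAdmin to show only those databases and to let them create new ones matching the prefix; on the PostgreSQL side, make them the owner of their databases and revoke the default PUBLIC connect right.

        # MySQL: prefix-based grants (the escaped underscore keeps the pattern literal)
        mysql -u root -p -e "CREATE USER 'alice'@'localhost' IDENTIFIED BY 'changeme';"
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON \`alice\_%\`.* TO 'alice'@'localhost';"
        # PostgreSQL: per-user database ownership, no access for everyone else
        createuser --no-superuser --no-createrole --no-createdb alice
        createdb -O alice alice_db
        psql -c "REVOKE CONNECT ON DATABASE alice_db FROM PUBLIC;"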


  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    Summary: why does this fail
        $ ssh foo 'R --version | head -n 1'
        bash: R: command not found
    but this succeeds?
        $ ssh foo 'grep -nHe bashrc ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
        $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)
    Details: I am a (rootless) user on two clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I am in the habit of doing, e.g.,
        EXEC_NAME='whatever'
        for S in 'foo' 'bar' 'baz' ; do
          ssh ${S} "${EXEC_NAME} --version"
        done
    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not, e.g. (compare the alias below to the alias above):
        $ ssh bar 'R --version | head -n 1'
        bash: R: command not found
        $ ssh bar 'grep -nHe bashrc ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux/bin/R'
        $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)
    Using aliases copes well with these install differences when I interactively shell into the server, but it fails when I try to script ssh commands (as above); i.e., interactively
        $ ssh foo
        ...
        foo> R --version
    calls my alias for R on remote host foo, but the scripted
        $ ssh foo 'R --version'
    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
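
    A small sketch of the usual explanation and workaround: aliases are only expanded by interactive shells, and ssh host 'cmd' runs the command through a non-interactive shell, so the aliases in ~/.bashrc never apply. Forcing an interactive bash (plus a tty, to quiet the job-control warning) is one way around it; the sketch assumes the remote shell is bash with the aliases in ~/.bashrc, as in the question.

        # Run the remote command under an interactive bash so aliases from ~/.bashrc apply
        ssh -t foo 'bash -ic "R --version | head -n 1"'
        # Scripted over several hosts, reusing the loop from the question
        for S in foo bar baz ; do
          ssh -t "${S}" 'bash -ic "R --version | head -n 1"'
        done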


  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then Outlook Anywhere kicks in, and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (Wireshark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious troubles handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown in the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions:
        1. Make Squid handle this correctly, if it is at all possible.
        2. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?).
        3. Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer; it looks like Outlook ignores that setting and chooses on its own which protocol to use.
        4. Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
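
    A hedged sketch toward option 2 that doesn't rely on the user-agent string: Outlook Anywhere (RPC over HTTP) issues requests with the RPC_IN_DATA and RPC_OUT_DATA extension methods, so a method-based ACL may be enough to spot it. Whether Squid 3.1 matches these extension methods cleanly should be verified first; the ACL name below is made up.

        # squid.conf lines (sketch) - they must come before the http_access rules
        # that require authentication:
        #   acl rpc_over_http method RPC_IN_DATA RPC_OUT_DATA
        #   http_access allow rpc_over_http
        # then check the parse and reload:
        squid -k parse && squid -k reconfigure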


  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:
        % snort -V
           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5
        %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..
    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
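
    A sketch of the usual way out of the "Can't find pcap DAQ!" error, assuming the capture file and config paths from the question: point snort explicitly at the pcap DAQ module for offline reading (the DAQ module directory varies by distribution, so check it first, and make sure net-libs/daq was built with pcap support).

        # See which DAQ modules this build can actually find
        snort --daq-list
        # Read the capture offline through the pcap DAQ (adjust --daq-dir to where daq_pcap.so lives)
        snort --daq pcap --daq-dir /usr/lib64/daq \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/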


  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay. I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1, right-click the folder, and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user Read/Write access, no other users in the list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly. I right-click the folder and go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so should be denied access. I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, no access-denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and the subfolders are a mixture of private folders and public folders, so I need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks


  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (none of these are hard requirements):
        - The content is always files, residing on a single server, accessible from Samba shares.
        - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
        - Permissions support, allowing for user-based control to reflect current permission access in the Samba shares.
        - The userbase will remain fairly static, so manual management of users is fine.
        - The majority of users will be Windows based.
    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?


  • Need help tuning Mysql and linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have accumulated 5 years of data history:
        - users/customers (200+)
        - contacts (40 million records)
        - campaigns
        - campaign_deliveries (73,843,764 records)
        - campaign_queue (8 million currently)
    As we get more users and the table record counts increase, our system/web app is getting slower and slower. Some queries take too long to execute.
    SCHEMA - Table contacts:
        | Field        | Type             | Null | Key | Default | Extra          |
        | contact_id   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id    | int(10) unsigned | YES  |     | NULL    |                |
        | name         | varchar(60)      | YES  |     | NULL    |                |
        | mail         | varchar(60)      | YES  | MUL | NULL    |                |
        | verified     | int(1)           | YES  |     | 0       |                |
        | owner        | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created | date             | YES  | MUL | NULL    |                |
        | geolocation  | varchar(100)     | YES  |     | NULL    |                |
        | ip           | varchar(20)      | YES  | MUL | NULL    |                |
    Table campaign_deliveries:
        | Field         | Type             | Null | Key | Default | Extra          |
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
    Table campaign_queue:
        | Field         | Type             | Null | Key | Default | Extra          |
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
    Slow query log:
        Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);
        Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);
    How can we optimize it? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and give each user a separate database? Thanks
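
    A first-pass sketch limited to the two slow queries shown (the index and database names below are placeholders): both are COUNT(*) queries filtered on owner (plus verified), so a composite index covering exactly those columns lets MySQL count from the index instead of examining millions of rows. Verifying with EXPLAIN before and after is the quickest sanity check.

        # Add a covering composite index for the owner/verified lookups
        mysql -u root -p appdb -e "ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);"
        # Confirm the new index is used and far fewer rows are examined
        mysql -u root -p appdb -e "EXPLAIN SELECT COUNT(*) AS total FROM contacts WHERE owner = 70 AND verified = 1\G"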


  • LDAP ACLs with ldapmodify & .ldif file: grant user access only

    - by plaetzchen
    I want to change the settings of my new LDAP server so that only users of the server, and not anonymous clients, can read entries. Currently my olcAccess looks like this:
        olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
        olcAccess: {1}to * by self write by dn="cn=admin,dc=example,dc=com" write by * read
    I tried to change it like so:
        olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
        olcAccess: {1}to * by self write by dn="cn=admin,dc=exampme,dc=com" write by users read
    But that gives me no access at all. Can someone help me with this? Thanks.
    UPDATE: This is the log after the changes mentioned by userxxx:
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 ACCEPT from IP=87.149.169.6:64121 (IP=0.0.0.0:389)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 do_bind: invalid dn (pbrechler)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 RESULT tag=97 err=34 text=invalid DN
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=1 UNBIND
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 closed
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 ACCEPT from IP=87.149.169.6:64122 (IP=0.0.0.0:389)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 do_bind: invalid dn (pbrechler)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 RESULT tag=97 err=34 text=invalid DN
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=1 UNBIND
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 closed
    pbrechler should be a valid user but has no system user (we don't need one). admin doesn't work either.
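
    A sketch of how such an ACL change is usually applied with ldapmodify against the config database (the database DN {1}hdb is an assumption; it may be {1}mdb or similar on a given install, so locate it first). The ACL lines themselves are the ones from the question.

        # Find the database entry that carries the olcAccess values
        ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config '(olcAccess=*)' olcAccess
        # Build the LDIF and apply it
        printf '%s\n' \
          'dn: olcDatabase={1}hdb,cn=config' \
          'changetype: modify' \
          'replace: olcAccess' \
          'olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none' \
          'olcAccess: {1}to * by self write by dn="cn=admin,dc=example,dc=com" write by users read' \
          > fix-acl.ldif
        ldapmodify -Y EXTERNAL -H ldapi:/// -f fix-acl.ldif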


  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008 R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine; these mappings are all on the same file server as the failing Home Folders. Half of the users are on one file server and half are on another, and users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in" and the policy to "Run logon scripts synchronously". There are no errors on the Domain Controller or on either file server. When I enabled Group Policy Preferences as an attempted workaround, I get this error:
        The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed.
    This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait before logging in" and "Run logon scripts synchronously" settings? Some other background facts:
        - The new Server 2008 R2 installation is a virtual machine.
        - It is on a new subnet in a different building from the old server.
        - DNS and DHCP were also migrated from the old DC to this new DC.
        - These Home Folders were all working properly before the migration.
    Are there new security restrictions/policies in Server 2008 R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve it? Thanks.


  • Windows xp mapped drives disconnect from Windows 7 on Peer to Peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: two XP machines, three Windows 7 machines, and a Windows 7 machine acting as the file server. I have no issues with the Windows 7 machines staying connected to the "server" via their mapped drives. However, once a day the two XP machines cannot reach the Windows 7 "server" through their mapped drives, and I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the persistent:yes option with NET USE on the XP machines and also set a static IP address on the "server". I turned the firewall off on the "server" with no difference, so I turned it back on. The internet connection is not interrupted when this happens; the clients just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around mid-day, and I've not been able to find any activity that happens at this time (like someone plugging in a thumb drive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs it for his engineering firm.


  • Recommendations for secure business collaboration tools

    - by Michael Prescott
    I'm searching for a secure and easy way for business partners to collaboratively edit and exchange documents, share calendars, create schedules, and assign tasks. I speculate that the ideal collaboration environment or workflow would actually involve several technologies and services. My co-workers and I have tried a variety of things from Google Apps to wikis, but nothing feels very fluid or complete. I suppose defining what we need and our constraints is in order:
        - collaboratively edit basic text documents and spreadsheets
        - exchange documents like flow-charts, graphs, and files generated by our other desktop applications, but not source code
        - assign tasks to each other and ourselves and track the history of those tasks
        - easily see when relevant documents have been modified since last viewing, with the ability to easily push notifications to relevant workers (a clean front page that shows updates would probably suffice)
        - provide limited access to contract workers and guest users
        - if a remote user's system is compromised (keystroke logger or other spyware), the criminal must not be able to gain access to all business documents (processes, trade secrets, customer lists, etc.) simply because they gained access to a single Google account (or whatever web service)
        - cannot be a difficult-to-administer VPN infrastructure
        - cannot cost more than $100 per month (yeah, money is tight)
        - needs to support up to 25 users
        - we can host our own web applications, but it must be a low-maintenance solution


  • Port forwarding for samba

    - by EternallyGreen
    Alright, here's the setup: Internet - modem - WRT54G - hubs - WinXP workstations & Linux SMB server. It's basically a home-style distributed internet connection setup, except it's at a school. What I want is remote, offsite SMB access. I figured I'd need to find out which ports need forwarding and then forward them to the server on the router. I'm told in another question on SF that multiple ports will need forwarding, and it gets somewhat complicated. One of the things I need to know is which ports require forwarding for this, and what complications or vulnerabilities could arise from it. Any additional information you think I should have before doing this would be great. I'm told SMB doesn't support encryption, which is fine. Given that I set up authentication/access control, all this means is that once one of my users authenticates and starts downloading data, the unencrypted traffic could be intercepted and read by a MITM, correct? Given that that's the only problem arising from the lack of encryption, this is of no concern to me. I suppose it could also mean a MITM injecting false data into the data stream, e.g.: a user requests file A, and the MITM intercepts and replaces the contents of file A with some false data. This isn't really an issue either, because my users would know that something was wrong, and it's not likely anyone would have an incentive to do this anyway. Another thing I've been informed of is Microsoft's poor implementation of SMB and its crap track record for security. Does this apply if only the client end is MS? My server is Linux.
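
    For reference, SMB clients talking straight to a Samba server need TCP 445 (and TCP 139 if older NetBIOS-session clients are involved); the UDP 137/138 NetBIOS name-service ports aren't needed for a direct-to-IP connection. A rough sketch of the forwards, assuming the WRT54G runs an iptables-based firmware such as DD-WRT (the WAN interface name and the server address are placeholders):

        # DNAT incoming SMB from the WAN side to the internal Samba server
        iptables -t nat -A PREROUTING -i vlan1 -p tcp --dport 445 -j DNAT --to-destination 192.168.1.10:445
        iptables -t nat -A PREROUTING -i vlan1 -p tcp --dport 139 -j DNAT --to-destination 192.168.1.10:139
        # Let the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -d 192.168.1.10 -p tcp -m multiport --dports 139,445 -j ACCEPT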


  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later it certainly can. My requirements are:
        - Basic XMPP services, including MUC and pubsub.
        - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API. Not ideal, but it looks workable.
        - Jingle, including relay and STUN. It's not at all clear to me whether the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address.
    Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else, say Tigase? What about the database? Should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ Thanks!


  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008 R2 Standard, a member server in a 2008 AD. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason is that some of the files, due to recent software changes, HAVE to be located locally on one of the workstations, but they can be accessed by other applications remotely.) Symbolic links seem the panacea here. I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent: when logged into the desktop of the file server, I can go to the directory and open the link, and it jumps to the other share as if it were local, exactly what would be expected. Then I tried it from another client computer (Windows 7 Professional as well), after the normal provisioning of R2R and L2R with fsutil... no joy. What I get is an access-denied "Logon failure: Unknown username or bad password." using the same account that I log on to the file server with locally (which happens to be the domain admin). So I cannot believe it is telling the truth, or... I assume it is not passing the credentials I am connecting to the first share with all the way through the symlink. The end result I want is for users on the domain to browse to share A; inside share A is a mixture of directories/files that reside there and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?


  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root). The problem: when I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say it's set to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created for user1 - which of course makes sense, otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine. The question: preferably using sudo, how can I change from user1 to user2 but still have access to user1's SSH_AUTH_SOCK?
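
    One commonly used sketch, assuming the filesystem holding /tmp supports POSIX ACLs and that sudo really does preserve SSH_AUTH_SOCK as described: before switching users, user1 grants user2 access to the agent socket and to the directory containing it, and user2 can then use the forwarded agent directly.

        # As user1, after logging in with agent forwarding
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # let user2 traverse the socket directory
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"                # let user2 talk to the agent socket
        sudo -i -u user2
        # As user2, with SSH_AUTH_SOCK still pointing at user1's socket
        ssh user3@machine2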


  • Domain authentication over OPEN wireless pre-logon (Windows 7 Pro) - No logon servers avail

    - by Shadow00Caster
    I have a plethora of laptops that are joined to an AD domain, and an enterprise wireless system set up. The users of these laptops will be using an OPEN, unsecured SSID, which will ultimately have a captive portal that uses RADIUS-AD auth, with firewall rules that allow access before captive-portal authentication to the proper IPs/ports of the DCs needed for authentication. I already have other laptops/users connecting to another SSID with 802.1X and SSO, and everything works perfectly pre-logon there. My problem is with this open network: for some reason I cannot get the machines to authenticate to AD. The laptops connect to the wireless network; I confirm this on the controller and can ping the laptop at startup. I sharked the wires on the two DCs that these machines authenticate against; I can see a DNS SOA update from the laptop I'm testing with and can ping that test laptop from both DCs. When I try to log on: "There are currently no logon servers available to service the logon request." The capture shows no incoming connections to either DC, even though the laptop is connected and pingable. Any help is greatly appreciated.


  • Exchange ActiveSync Exception

    - by Dmeglio
    One of the users on my network is having an issue with his iPhone syncing via ActiveSync. Overall it's working, but every now and then he gets "Synchronization with your iPhone failed for 3 items." I asked him to go into OWA and turn on mobile phone logging. Looking through the logs, this is what stood out to me:
        SyncCommand_GenerateResponsesXmlNode_AddChange_Exception :
        Microsoft.Exchange.Data.Storage.PropertyErrorException: Property: [{00062008-0000-0000-c000-000000000046}:0x8501] ReminderMinutesBeforeStartInternal, PropertyErrorCode: NotFound, PropertyErrorDescription: .
        at Microsoft.Exchange.Data.Storage.PropertyBag.ThrowIfPropertyError(StorePropertyDefinition propertyDefinition, Object propertyValue)
        at Microsoft.Exchange.Data.Storage.StoreObject.GetProperty(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.Data.Storage.MeetingMessage.get_Item(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.AirSync.SchemaConverter.XSO.XsoMeetingRequestProperty.get_NestedData()
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncMeetingRequestProperty.InternalCopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncProperty.CopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncDataObject.CopyFrom(IProperty srcRootProperty)
        at Microsoft.Exchange.AirSync.SyncCollection.ConvertServerToClientObject(ISyncItem syncItem, XmlNode airSyncParentNode, SyncOperation changeObject)
        at Microsoft.Exchange.AirSync.SyncCollection.GenerateCommandsXmlNode(XmlDocument xmlResponse, IAirSyncVersionFactory versionFactory, String deviceType, ProtocolLogger protocolLogger, MailboxLogger mailboxLogger)
    Does anyone have any idea what might cause this? We have four iPhone users connected to our Exchange via ActiveSync. Right now, this seems to be the only user experiencing the issue. I'd appreciate any help anyone can provide. Thanks.


  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the webdav parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster awaiting at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
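
    One workaround often used instead of umask 000, sketched here with the user and path from the question's example (the envvars location is distribution-specific): make the WebDAV tree setgid to the user's own group and let Apache create group-writable files, so the user can edit them by other means while other unprivileged accounts stay locked out.

        # Give the tree to the user's group and make new entries inherit it
        chgrp -R foo /home/foo/www
        find /home/foo/www -type d -exec chmod 2770 {} +
        find /home/foo/www -type f -exec chmod 0660 {} +
        # Have Apache create files rw for owner and group (instead of umask 000)
        echo 'umask 002' >> /etc/apache2/envvars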


  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, providing both HDMI and VGA connectivity: HDMI for DisplayPort and Mini DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with HDMI than VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini DisplayPort...). Also to my surprise, VGA outputs 1080p over a 50 ft cable run with very minimal degradation on certain laptops; other laptops just don't offer 1080p as a resolution choice and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p over VGA, a W520 won't (both do 1080p over DisplayPort/Mini-DP). What determines what resolutions a laptop is willing to output over VGA? I'm thinking this comes down to either a video driver that only supports certain output resolutions, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD be on VGA, an analog output). The basic reason for the question is that I noticed, say, a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) over VGA. Now, this wouldn't be surprising at all except SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more than VGA over same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to SuperUser, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed laptop scenario.


  • IPTables: NAT multiple IPs to one public IP

    - by Kaemmelot
    I'm looking for a way to NAT two or more inner IPs (in my case Xen domains) to one outer IP. I tried to use:
        iptables -t nat -A PREROUTING -d 123.123.123.123 -j DNAT --to 1.2.3.4 --to 1.2.3.7
        iptables -t nat -A POSTROUTING -s 1.2.3.4 -j SNAT --to 123.123.123.123
        iptables -t nat -A POSTROUTING -s 1.2.3.7 -j SNAT --to 123.123.123.123
    and got an error:
        iptables v1.4.14: DNAT: Multiple --to-destination not supported
        Try `iptables -h' or 'iptables --help' for more information.
    I found this in the manpage: "Later Kernels (>= 2.6.11-rc1) don't have the ability to NAT to multiple ranges anymore." So my question is: why is it not possible anymore, and is there a workaround? Maybe I should use another method I don't know about yet?
    EDIT: The idea is to use the system like a router, so I have one address but multiple users behind it. The problem is I don't know which connection refers to which user (for example 1.2.3.4). But I do know they all have different ports open for incoming traffic. So my solution (for DNAT) would be to NAT all incoming connections to all users and filter all unused ports, so each connection goes to one single user. For outgoing traffic I would use:
        iptables -A FORWARD -i eth0 -d 1.2.3.4 -m state --state ESTABLISHED,RELATED -j ACCEPT
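
    A sketch of the port-based pattern the EDIT describes, with made-up port numbers (one DNAT rule per listening service, plus the SNAT rules already shown for outbound traffic; the WAN interface eth0 is assumed):

        # Incoming: steer each forwarded port to the inner host that listens on it
        iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2204 -j DNAT --to-destination 1.2.3.4
        iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2207 -j DNAT --to-destination 1.2.3.7
        # Outgoing: hide both inner hosts behind the single public address
        iptables -t nat -A POSTROUTING -o eth0 -s 1.2.3.4 -j SNAT --to-source 123.123.123.123
        iptables -t nat -A POSTROUTING -o eth0 -s 1.2.3.7 -j SNAT --to-source 123.123.123.123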

