Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :( )

    Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

    * clientdb.prod.example.com for Production
    * clientdb.perf.example.com for Performance Testing
    * clientdb.qa.example.com for QA
    * clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain.

    This seems to be related to Providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
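    The "try qa.example.com, then example.com" ordering described above maps onto the Windows DNS suffix search list, which can be pushed per environment via Group Policy or set directly in the registry. A minimal sketch, assuming a QA machine and the example domain names from the question (the registry value shown is the one Group Policy also writes):

        rem set the DNS suffix search list on a QA server (comma-separated, in lookup order)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SearchList /t REG_SZ /d "qa.example.com,example.com" /f
        rem verify the effective search list
        ipconfig /all | findstr /i /c:"Search List"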

    Read the article

  • Add IPv6 support to DirectAdmin server

    - by George Boot
    I just set up a new DirectAdmin server, and I want to prepare it for IPv6 use. My ISP has given me a range of IPv6 addresses that I can use. Let's say that address is 2a01:7c8:**:1f::. My network adapter uses DHCP to obtain its IP addresses. When I type ifconfig eth0 I get the following result:

        eth0      Link encap:Ethernet  HWaddr 52:**:**:**:ce:f3
                  inet addr:37.**.**.44  Bcast:37.**.**.255  Mask:255.255.255.0
                  inet6 addr: 2a01:7c8:****:1f::/64 Scope:Global
                  inet6 addr: fe80::5054:ff:fe87:cef3/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:38941 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:29439 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3779534 (3.6 MiB)  TX bytes:5089379 (4.8 MiB)

    As you can see, I have an IPv6 address set, but I can't ping6 an IPv6 host. I get the error: connect: Network is unreachable. I decided that I needed a gateway, so I tried to add one: ip -6 route add default via 2a01:7c8:****::1 dev eth0 (2a01:7c8:**::1 is the gateway of my ISP). But it throws an error: RTNETLINK answers: No route to host. Does somebody know what to do, and how to solve this issue? Thanks a lot!
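    A common cause of "RTNETLINK answers: No route to host" in this situation is that the ISP's gateway address is not covered by the on-link /64, so the kernel refuses a default route through it until it knows how to reach the gateway itself. A minimal sketch of that workaround, reusing the masked addresses from the question as placeholders:

        # mark the gateway as directly reachable (on-link) via eth0
        ip -6 route add 2a01:7c8:****::1 dev eth0
        # now the default route through it can be installed
        ip -6 route add default via 2a01:7c8:****::1 dev eth0
        # verify
        ip -6 route show
        ping6 -c 3 2001:4860:4860::8888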

    Read the article

  • Rebuild Apple RAID set

    - by Clinton Blackmore
    We have a Mac Pro tower with an Apple RAID card in it, using third-party drives. When one drive failed, we replaced it, and the RAID 5 set was nearly done rebuilding when the computer was rebooted. It did not come back up. We are now booting off a different internal volume, and have three (third-party) drives of identical spec (including revision and firmware) in the box. One of the drives is a global spare; the other two are recognized as belonging to a RAID set but are in "Roaming" mode. The intention is to recreate the three-drive RAID set using the data on the two drives that are good. When we tell the system to create a RAID 5 using the three drives, it tells us that it'll create a RAID set but everything will be lost. There are no obvious options in Apple's RAID Utility to rebuild a RAID using the two good drives and incorporating the third drive, and we've looked through the options for the raidutil command. Fortunately, all important data is backed up and we can rebuild from scratch, but is there any way to make the RAID set work again?

    Read the article

  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a NetFlow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505s. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs. I can see from the Plugin Netflow Statistics page that NetFlow packets from my ASAs are being received - the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the NetFlow interface; I'm just seeing a pie chart showing 100% traffic for eth0. The interfaces and documentation are a little hard to follow, so I am not sure I have got things configured correctly. When setting up my NetFlow-device.2 I can specify "Virtual NetFlow Interface Network Address" - the web UI says "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located." Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (e.g. 192.168.0.1/24)? If it should be a network address, is this the network in which one of my ASAs is, or the network in which my ntop server is? If a host IP address, is this the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else? Do I need a separate virtual interface for each ASA I am collecting NetFlow data from? Any guidance would be greatly welcome.

    Read the article

  • Server 2003 Terminal Services Printers not redirecting, no sessions created.

    - by mikerdz
    OK, odd scenario on a Windows Server 2003 Standard server running as a Terminal Server. On Friday I installed 2 new Windows 7 machines to replace older XP machines. After adding these machines and their local printers, none of the other 16 Windows 7 machines can redirect printing to the server. I have checked Group Policy on the domain controller; nothing is being blocked. In Terminal Services Manager, the client settings are set to Use Client Settings. On the RDP client, port redirection is enabled. I have tried disabling the Use Client Settings option and manually selecting the options for printer redirection and default printer connection, but it still does not work. After some researching, I found this MS article: http://support.microsoft.com/kb/2492632 I went ahead and added the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd\fEnablePrintRDR DWORD that the article references and set it to "1" to enable the option. I restarted the server, but it still would not print. I am getting quite desperate with this issue because nothing seems to have changed when installing the two new clients and printers. I uninstalled the print drivers for the printers from the server. I have even gone as far as connecting each of the printers manually via a UNC path (\\computername\printer), but even though that works, it prints awfully slow. Please help!!!!
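    For reference, the registry value from the KB article can be added from a command prompt on the terminal server; a small sketch using the exact key quoted above (reboot, or restart the Terminal Services service, afterwards):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd" /v fEnablePrintRDR /t REG_DWORD /d 1 /f
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd" /v fEnablePrintRDR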

    Read the article

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage), which is built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like VESA mode) and after several moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump although it is installed as the default debugger. After a reboot it said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was very small. I replaced it with one from a restore point and Windows booted successfully. But after about 3 hrs it crashed again in the same way! I looked in Event Viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set "deny for everyone" on this device's ACL, and after several hours my Windows crashed. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad. But I found an event about a paging error on \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I can't use SMART tools on the RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors, fixed them, then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks

    Read the article

  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through ssh. Gitosis is installed on a Debian Lenny server; I'm using git from a Windows machine (msysgit). The strange thing is, if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing any action against the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    Question is: why am I '[email protected]'? This email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit configs again (it worked). There are 2 public keys in keydir: [email protected] and [email protected]; their content is identical and they have kaze@KAZE at the end. The origin URL looks like git@lennyserver:test_project. Now, the question is - why did Git (or gitosis) suddenly decide to call me by email instead of name@machinename? I've changed a couple of things trying to set up gitosis (updated git on the server to 1.6.0, for example), but maybe I broke something in my local git installation?
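    Worth noting when debugging this: gitosis does not read user.email at all - it derives the user name from which SSH key authenticated, i.e. from the file name of the matching key in keydir/ (written into the git user's authorized_keys as a forced gitosis-serve command). A small sketch of how one might check both sides; the hostname comes from the question, the paths are the usual OpenSSH/gitosis layout and may differ on your install:

        # on the client: see which key is actually offered and accepted
        ssh -v git@lennyserver 2>&1 | grep -i offering
        # on the server: see the key-name -> gitosis-serve mapping gitosis generated
        sudo grep "gitosis-serve" ~git/.ssh/authorized_keys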

    Read the article

  • Have only read access to Samba

    - by Tahir Malik
    Hi, I've been struggling a lot with Samba on CentOS 5.5 lately. I develop on Windows 7 and send files through scp (an ant task), but it's too slow, so I wanted to set up Samba properly. After installing it and following some guides, I've done the following:

    - Disabled the firewall (iptables)
    - Disabled SELinux (didn't do that at the start, but it didn't help either)
    - Set up my smbusers file to map my Windows user to root (root = "Tahir Malik" -- works)
    - Added a local user mitco to the Samba password database with the command smbpasswd -a mitco, because the Windows user had only read access

    So both users have read access to my share. Here is my smb.conf snippet:

        [global]
        workgroup = MITCO
        server string = Samba Server Version %v
        netbios name = centos
        ; interfaces = lo eth0 192.168.12.2/24 192.168.13.2/24
        ; hosts allow = 127. 192.168.12. 192.168.13.

        [alf4]
        comment = Alfresco 4
        path = /opt
        read only = no
        valid users = mitco, mitco
        force user = root
        force group = root
        admin users = mitco , mitco
        writeable = yes
        ; browseable = yes

    What may also be important is that /opt is only writable by root, but that shouldn't matter because I use force user and force group, plus admin users. The log file:

        [2012/09/29 07:43:44, 0] smbd/server.c:main(958)
          smbd version 3.0.33-3.39.el5_8 started.
          Copyright Andrew Tridgell and the Samba Team 1992-2008
        [2012/09/29 07:43:59, 1] smbd/service.c:make_connection_snum(1085)
          mitco-tahir (192.168.13.1) connect to service alf4 initially as user root (uid=0, gid=0) (pid 5228)
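    A quick way to double-check what Samba has actually loaded, and whether the write problem is in Samba or on the filesystem, is to dump the effective configuration and try a write from the server itself; a small sketch using the share and user from the config above:

        # show the parsed, effective configuration (also flags typos and unknown parameters)
        testparm -s /etc/samba/smb.conf
        # connect as mitco and attempt to create a file on the share
        smbclient //localhost/alf4 -U mitco -c 'put /etc/hosts test-write.txt'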

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is only taken on the first open; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think that opening the menu for the first time runs some kind of "cache function" that tries to find which apps are present and need to be placed in the menu shown to the user. But I didn't find this function, so I couldn't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but I found that most of my icons are already 25x25. Any other ideas? Maybe loading it in a separate process, or running it at startup... I don't know. PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!

    Read the article

  • Can I use CNAME with an IP address? Why does it work (sometimes)?

    - by Maciek Sawicki
    I believe the easiest answer to the first question is "No, you have A records for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible? Now, when I check it from home, I get the following error:

        beast:~ viroos$ host somesubdomain.somedomain.com
        Host somesubdomain.somedomain.com not found: 3(NXDOMAIN)

    I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle?

    //edit: I'm adding the dig output

        ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;somesubdomain.somedomain.com. IN A

        ;; ANSWER SECTION:
        somesubdomain.somedomain.com. 67 IN CNAME xxx.xxx.xxx.xx1.

        ;; AUTHORITY SECTION:
        . 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400

        ;; Query time: 72 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Tue Apr 10 00:11:01 2012
        ;; MSG SIZE  rcvd: 136
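    For comparison, the record type meant for pointing a name at an IP address is an A record; a small sketch of the two forms in BIND zone-file syntax (names and address are placeholders). The dig output above shows the CNAME form: the "target" is an IP written as if it were a hostname, which resolvers then try to look up as a name and normally fail on.

        ; correct: map the name to the address with an A record
        somesubdomain    IN  A      192.0.2.10

        ; what was configured: a CNAME whose target is an IP-address-shaped "hostname"
        somesubdomain    IN  CNAME  192.0.2.10.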

    Read the article

  • How should I configure my Active Directory servers so that if one goes down, users are not kicked off SQL?

    - by Matty Brown
    Today we shut down one of our Active Directory servers during office hours to check the loading on a UPS. Since all the server did was provide Active Directory in a separate building, in case the main building caught fire or whatever, we didn't think it would have any effect on our users. Seconds after the server was shut down, we had a dozen phone calls from users experiencing this issue:

        [Microsoft SQL Server Login]
        SQLState: '28000'
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed.
        The login is from an untrusted domain and cannot be used with authentication.

    Once we realized what had happened, we quickly rebooted the downed Active Directory server. Problem solved. But why did this happen? And what if one day a server has a breakdown and is offline for hours, or days? Shouldn't the other Active Directory servers in the domain service authentication requests without disruption to users? We have 3 Windows Server 2003 Standard servers running Active Directory as domain controllers with Global Catalogs, all physically located on the same network at Gigabit speeds. I believe the domain was originally Windows 2000, or maybe even NT 4.0. Could the issue be down to old Group Policies inherited from these older server OSes, or some default setting in Active Directory that needs changing?
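    One useful diagnostic next time is to check, from the SQL Server host, which domain controller the machine is actually bound to and whether the DC locator fails over when that DC disappears; a small sketch with standard Windows tools (the domain name is a placeholder):

        rem which DC (and Global Catalog) is this machine currently using?
        nltest /dsgetdc:example.local
        echo %LOGONSERVER%
        rem force the locator to rediscover after a DC goes down
        nltest /dsgetdc:example.local /force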

    Read the article

  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:

        Could not open JDBC Connection for transaction; nested exception is
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from
        the server was 25899 milliseconds ago. The last packet sent successfully to the server was 25899
        milliseconds ago, which is longer than the server configured value of 'wait_timeout'. You should
        consider either expiring and/or testing connection validity before use in your application, increasing
        the server configured values for client timeouts, or using the Connector/J connection property
        'autoReconnect=true' to avoid this problem.

    For MySQL, the global 'wait_timeout' and 'interactive_timeout' are set to 3600 seconds and 'connect_timeout' is set to 60 secs. The wait_timeout value is much higher than the 26 secs (25899 msecs) mentioned in the exception trace. I use DBCP for connection pooling and here is the Spring bean config for the data source:

        <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx" />
            <property name="poolPreparedStatements" value="false" />
            <property name="maxActive" value="3" />
            <property name="maxIdle" value="3" />
        </bean>

    Any idea why this could be happening? Will using c3p0 solve the problem?
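    Independently of finding which timeout is actually killing the connection, the usual mitigation with DBCP is to let the pool validate connections before handing them out and to evict stale idle ones; a sketch of the extra BasicDataSource properties this would add to the bean above (the values are illustrative):

        <property name="validationQuery" value="SELECT 1" />
        <property name="testOnBorrow" value="true" />
        <property name="testWhileIdle" value="true" />
        <property name="timeBetweenEvictionRunsMillis" value="300000" />
        <property name="minEvictableIdleTimeMillis" value="600000" />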

    Read the article

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon

        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose

        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs for app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open that IP in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers?

    Example: this is a real app - http://freechat.eu01.aws.af.cm. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.
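    The shared AppFog/OpenShift frontends decide which application to serve from the Host header, so the balancer has to send each backend its own hostname rather than the client's. A hedged sketch of that idea using HAProxy's http-send-name-header (check that the directive exists in your HAProxy version; the server names below deliberately reuse the backend hostnames):

        listen http_proxy
            bind [myip]:80
            mode http
            balance source
            # send the chosen server's name as the Host header to that backend
            http-send-name-header Host
            server app1.aws.af.cm app1.aws.af.cm:80 maxconn 300 check
            server app2.aws.af.cm app2.aws.af.cm:80 maxconn 300 check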

    Read the article

  • Bind a key to a commandline command in Mac OS X?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run the command /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up a global keyboard shortcut, but it seems that this won't allow me to bind a key to an arbitrary shell command: "Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications." I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to count as applications. Note that this screensaver/lock-screen is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in Gnome and Windows (with varying success). How about with Leopard? Should I be looking at doing this with AppleScript? (I haven't used that since the HyperCard days ...)
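    One common approach is to wrap the shell command in a tiny AppleScript saved as an application, then bind a hotkey to it with a third-party launcher (Quicksilver, FastScripts, Spark and the like were the usual choices on Leopard); a minimal sketch reusing the command from the question:

        -- save from Script Editor as an application, then assign it a global hotkey in a launcher
        do shell script "/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine"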

    Read the article

  • Cisco ASA 8.2 ACL For NAT

    - by javano
    Sadly I have gone back in time to ASA 8.2(5)33, which I am not so familiar with. I have configured NAT between two interfaces but traffic isn't passing because I can't get the ACL to work. (The full config, which isn't very big, is here, but to keep this post tidy I have just pasted the important parts below.)

        interface Ethernet0/0
         switchport access vlan 108
        !
        interface Ethernet0/6
         switchport access vlan 104
        !
        interface Ethernet0/7
         switchport access vlan 105
        !
        interface Vlan104
         description BUILDING2
         nameif BUILDING2
         security-level 0
         ip address 10.104.0.1 255.255.255.0
        !
        interface Vlan105
         description BUILDING1
         nameif BUILDING1
         security-level 0
         ip address 10.105.0.1 255.255.255.0
        !
        interface Vlan108
         description Main LAN VLAN
         nameif lan
         security-level 0
         ip address 172.22.0.215 255.255.255.0
        !
        object-group network obj_net_Remote_Hosts
         network-object host 111.111.111.3
         network-object host 111.111.111.65
        object-group network obj_host_pc1_eth1
         network-object host 10.104.0.111
        object-group network obj_host_pc2_eth1
         network-object host 10.104.0.112
        object-group network obj_host_pc3_eth1
         network-object host 10.104.0.106
        object-group network obj_host_pc4_eth1
         network-object host 10.104.0.107
        object-group network obj_net_PCs
         description IPs of PCs
         group-object obj_host_pc1_eth1
         group-object obj_host_pc2_eth1
         group-object obj_host_pc3_eth1
         group-object obj_host_pc4_eth1
        access-list acl_NAT_pc1_91 extended permit tcp host 10.104.0.111 host 111.111.111.3 eq 8101
        access-list acl_Permit_PCs extended permit tcp object-group obj_net_PCs object-group obj_net_Remote_Hosts eq 8101
        !
        global (BUILDING1) 11 111.111.222.91 netmask 255.255.255.255
        nat (BUILDING2) 11 access-list acl_NAT_pc1_91
        access-group acl_Permit_PCs in interface BUILDING2
        route BUILDING1 111.111.111.3 255.255.255.255 10.105.0.2 1
        route BUILDING1 111.111.111.65 255.255.255.255 10.105.0.2 1

    When I try to connect from PC1 to IP 111.111.111.3, I see the following error logged on the ASA console:

        %ASA-2-106001: Inbound TCP connection denied from 10.104.0.111/38495 to 111.111.111.3/8101 flags SYN on interface blades

    What the deuce!

    Read the article

  • How to add a privilege to an account in Windows?

    - by mark
    Given:

    - A VM running Windows 2008
    - I am logged on there using my domain account (SHUNRANET\markk)
    - I have added the "Create global objects" privilege to my domain account
    - The VM is restarted (I know logging off and on is enough, but I had to restart)
    - I log on again using the same domain account; it still seems to have the privilege
    - I run some process and examine its Security properties using Process Explorer: the account does not seem to have the privilege

    This is not idle curiosity. I have a real problem: without this privilege, the named pipe WCF binding works on neither Windows 2008 nor Windows 7! Here is an interesting discussion on this matter - http://social.msdn.microsoft.com/forums/en-US/wcf/thread/b71cfd4d-3e7f-4d76-9561-1e6070414620. Does anyone know how to make this work? Thanks.

    EDIT: BTW, when I run the process elevated, everything is fine and Process Explorer does display the privilege as expected. But I do not want to run it elevated.

    EDIT2: I equally welcome any solution, be it configuration only or mixed with code.

    EDIT3: I have posted the same question on the MSDN forums and they have redirected me to this page - http://support.microsoft.com/default.aspx?scid=kb;EN-US;132958. I have yet to determine its relevance, but it looks promising. Notice also that it is a completely coding solution that they propose, so whoever moved this post to Server Fault - please reinstate it back on Stack Overflow.
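    A quick way to compare what the non-elevated (filtered) and elevated tokens actually contain is to list the current token's privileges from a normal and from an elevated command prompt and compare the output; a minimal sketch ("Create global objects" is SeCreateGlobalPrivilege internally):

        rem run once in a normal prompt and once in an elevated prompt
        whoami /priv | findstr /i SeCreateGlobalPrivilege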

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have an Ubuntu 10.04 server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" from a file not created by them?

    Edit: The share is defined in smb.conf as follows:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
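    One smb.conf parameter that targets exactly this situation is dos filemode: with it enabled, any user who has write permission on a file (not just the owner) is allowed to change its permission bits, which is what clearing the mapped Read Only attribute requires. A sketch of the share section with that addition - worth verifying against your Samba version before rolling it out:

        [Projects]
        path = /srv/samba/Projects
        valid users = @Staff
        writeable = yes
        create mode = 0777
        directory mode = 0777
        store dos attributes = Yes
        ; assumption: lets group members with write access modify permissions/attributes
        dos filemode = Yes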

    Read the article

  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using Forms authentication and absolutely any kind of membership provider whatsoever. Thus far ActiveDirectory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider, but it works fine with an ASP.NET site; I just can't make it work with SharePoint. In SharePoint I get the following error when I look for StubProvider:Bob (or anything else for that matter) from the "Policy for Web Application" people picker:

        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
            at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
            at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict).

    The provider is named as the Authentication Provider for the site collection in question. As far as I can tell, this is because SharePoint is still trying to query Active Directory rather than talking to the provider I'm asking it to use. My SharePoint Central Administration configuration includes this:

        <membership>
          <providers>
            <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

    And also:

        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the people picker, or why it is still trying to use Active Directory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.

    Read the article

  • What do the readonly attributes in diskpart really mean?

    - by marzipan
    I am wondering exactly what the "Read-only" disk and volume attributes that you can twiddle in diskpart on Windows 7 really mean. I am trying to set up an external USB drive as an installation medium for my own software, so I'd like to protect it against casual or inadvertent changes by the users it is given to, so they don't screw up the installation files they might need in the future. From what I can tell by experimenting with diskpart, the volume read-only attribute is actually stored on the physical disk somewhere, because I can set it and it shows up when I take the drive to another machine. This is great, because my users can't (easily) change any of the files on the volume, or format it from Windows Explorer. However, the disk read-only attribute seems to be just an aspect of how the current machine is accessing the drive. When I set it I can no longer delete the volume on the disk via Disk Management, but when I take the drive to another machine, the attribute is no longer set and in Disk Management I can delete the volume on the disk. I guess I'm not that worried about my users doing that, but I am annoyed that I don't understand what these attributes are really doing. Another thing I don't understand is that the "volume" read-only attribute actually seems to be global to the disk - if I have two volumes on the disk and I set the read-only flag on one of them, it gets set on the other one too. ?!? I have the feeling I'm not searching for the right docs - all I'm finding is diskpart docs that give the syntax for twiddling these attributes, not what they really mean. Any pointers would be very welcome! Thanks, Asa
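    For reference, a sketch of the diskpart commands under discussion (the disk and volume numbers are placeholders - check them with list disk and list volume first):

        DISKPART> select disk 1
        DISKPART> attributes disk set readonly
        DISKPART> attributes disk clear readonly
        DISKPART> select volume 2
        DISKPART> attributes volume set readonly
        DISKPART> attributes volume clear readonly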

    Read the article

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating user home folders instead of our 2nd-level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home share. If they open AD Users and Computers, open the properties for a user, enter \\home\users\%username% in the Profile tab and then click OK, they get the following error:

        The \\home\users\username home folder was not created because you do not have create access on the server.
        The user account has been updated with the new home folder value but you must create the directory manually
        after obtaining the required access right.

    Right now I have given the Helpdesk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder only show Administrators with full control, and no permissions for the configured user account. It sure sounds like I'd have to make the helpdesk local admins on the file servers, which is what I'd like to avoid - especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.
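    As a starting point for experimenting with the minimum rights, the operation ADUC performs needs roughly "list this folder" and "create folders" on the root (the help desk account then owns the newly created folder, which is what normally lets the ACL on it be set). A hedged sketch of granting just that with icacls - the group name and path are placeholders, and the exact minimum set should be confirmed in a lab before relying on it:

        rem list (RD), traverse (X), create subfolders (AD) and read permissions (RC), on this folder only
        icacls D:\home\users /grant "DOMAIN\HelpDesk":(RD,X,AD,RC)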

    Read the article

  • How (in)secure are cell phones in reality?

    - by Aron Rotteveel
    I was recently re-reading an old Wired article about the Kaminsky DNS vulnerability and the story behind it. In this article there was a quote that came across as a little exaggerated to me:

        "The first thing I want to say to you," Vixie told Kaminsky, trying to contain the flood of feeling, "is never, ever repeat what you just told me over a cell phone." Vixie knew how easy it was to eavesdrop on a cell signal, and he had heard enough to know that he was facing a problem of global significance. If the information were intercepted by the wrong people, the wired world could be held ransom. Hackers could wreak havoc. Billions of dollars were at stake, and Vixie wasn't going to take any risks.

    When reading this I could not help but feel that it was a bit blown up and theatrical. Now, I know very little about cell phones and the security problems involved, but to my understanding, cell phone security has improved quite a bit over the past few years. So my question is: how insecure are cell phones in reality? Are there any good articles that dig a bit deeper into this matter?

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: HAProxy => Nginx => Unicorn. How can I forward the real IP address from HAProxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I also read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648) - how will this impact us?

    HAProxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
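    One way to wire this up is to have HAProxy append the client address to X-Forwarded-For, then tell nginx to trust that header when the request arrives from the local HAProxy, so $remote_addr (and the headers it already passes to Unicorn above) carry the real client IP. A sketch of the relevant fragments, assuming nginx was built with the realip module:

        # HAProxy: add the client IP to X-Forwarded-For on requests to this backend
        backend deployer-production
            option forwardfor
            balance roundrobin
            server deployer-production localhost:9000 check

        # nginx: restore the client address from X-Forwarded-For for requests coming from 127.0.0.1
        server {
            listen 9000 default;
            set_real_ip_from 127.0.0.1;
            real_ip_header X-Forwarded-For;
            # ... rest of the server block unchanged ...
        }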

    Read the article

  • lighttpd with multiple IPs, each with a UCC certificate and many hostnames

    - by Dave
    I'd like to get lighttpd working with UCC certificates, but I can't seem to figure out the correct syntax. Essentially, for each IP address, I have one UCC certificate and a bunch of hostnames.

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
        }

    The above works fine for one hostname, but as soon as I try to set up another hostname (note the same SSL cert):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }

    ...I get this error:

        Duplicate config variable in conditional 6 global/SERVERsocket==10.0.0.1:443: ssl.engine

    Is there any way I can put a conditional so that ssl.engine is only enabled if it is not already enabled? Or do I have to put all my $HTTP["host"] blocks inside the same $SERVER["socket"] block (which will make config file management more difficult for me), or is there some entirely different way to do it? This has to be repeated for multiple IPs too (so I'll have a bunch of $SERVER["socket"] == "10.0.0.2:443" blocks, etc.), each with one UCC cert and many hostnames. Am I going about this the wrong way entirely? My goal is to conserve IP addresses when I have many websites that are related and can share an SSL certificate, but still need their own SSL-accessible version from the appropriate hostname (instead of a single secure.mywebsite.com).
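    For what it's worth, the pattern that avoids the duplicate ssl.engine error is to declare the SSL settings once per socket and nest all of that certificate's hostnames inside that single block; the per-host fragments can still live in separate files pulled in with lighttpd's include directive to keep management sane. A sketch reusing the names from the question:

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }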

    Read the article

  • Ubuntu Pound Reverse Proxy Load Balancing Based off active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly picks the backend server to forward each request to. I've put one backend machine under so much load that it went into swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine - however, it doesn't. I've read the man page and it seems like the directive "DynScale 1" is what would monitor this, but it still redirects to the overloaded server. I've also put "HAport 22" on the backend, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it sheds the load and responds again - but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

        ######################################################################
        ## global options:
        User "www-data"
        Group "www-data"
        #RootJail "/chroot/pound"

        ## Logging: (goes to syslog by default)
        ##  0 no logging
        ##  1 normal
        ##  2 extended
        ##  3 Apache-style (common log format)
        LogLevel 3

        ## check backend every X secs:
        Alive 5
        DynScale 1
        Client 1200
        TimeOut 1500

        # poundctl control socket
        Control "/var/run/pound/poundctl.socket"

        ######################################################################
        ## listen, redirect and ... to:

        ## redirect all requests on port 80 to SSL
        ListenHTTP
            Address 192.168.1.XX
            Port 80
            Service
                Redirect "https://xxx.com/"
            End
        End

        ListenHTTPS
            Address 192.168.1.XX
            Port 443
            Cert "/files/www.xxx.com.pem"
            Service
                BackEnd
                    Address 192.168.1.1
                    Port 80
                    HAport 22
                End
                BackEnd
                    Address 192.168.1.2
                    Port 80
                    HAport 22
                End
            End
        End

    Read the article
