Search Results

Search found 11365 results on 455 pages for 'authorization basic'.


  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio, but I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:

      - the options being selected, including all tables
      - the folder where the script files will go
      - the folder being empty before scripting
      - the scripting process saying "Success" when scripting a table
      - the destination folder no longer empty, with a hundred or so script files
      - the scripts of some tables not being in the folder

    Earlier, SSMS would not script some views. Is it a known issue that the Generate Scripts task does not generate scripts?

    Update: This is a known issue on Microsoft Connect, but Microsoft couldn't reproduce the steps, so they closed the ticket. It fails on SQL Server 2005 and also fails on SQL Server 2008.

    Update Two: Some basic questions:

    1. What version of SQL Server?
       Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
       Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
       Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
       Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
       Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
    2. What O/S are you running on?
       Windows Server 2000, Windows Server 2003, Windows Server 2008
    3. How are you logging in to SQL Server?
       sa/password and trusted authentication
    4. Have you verified your account has full access to all objects?
       Yes, I have access to all objects.
    5. Can you use the objects that fail to script? (e.g. SELECT TOP(10) * FROM nonScriptingTable)
       Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update Four: No errors found in the database using:

        DBCC CHECKDB
        GO
        DBCC CHECKCONSTRAINTS
        GO
        DBCC CHECKFILEGROUP
        GO
        DBCC CHECKIDENT
        GO
        DBCC CHECKCATALOG
        GO
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope someone can point me in the correct direction - it looks like I don't have enough knowledge of the subject, and timeframes are too tight for me to explore different scenarios in depth. We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. VC is going to be in the first datacenter, and both datacenters are networked. SAN replication is block level, so it seems like I cannot just replicate changes - all writes would have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered. I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery; I would say a 4-hour recovery time is acceptable, so a fancy automatic SRM-like DR scenario would not be easily accepted for financial reasons - however, any comments are welcome. The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses. I understand that this seems like a horribly manual process, and I am almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go in? Any articles regarding the subject? This is a brand new setup, and we would rather build up a basic recovery process and scale it later. I just need the right direction to allow for such scalability. Thank you very much in advance!

    Read the article

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently just started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V Manager, I select "Import Virtual Machine" and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

        "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)"

    Have I somehow messed something up in configuration? Or is this a known issue? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

    Read the article

  • Kerberos & single sign-on for website

    - by Dylan Klomparens
    I have a website running on a Linux computer using Apache. I've employed mod_auth_kerb for single-sign-on Kerberos authentication against a Windows Active Directory server. In order for Kerberos to work correctly, I've created a service account in Active Directory called dummy. I've generated a keytab for the Linux web server using ktpass.exe on the Windows AD server using this command:

        ktpass /out C:\krb5.keytab /princ HTTP/[email protected] /mapuser [email protected] /crypto RC4-HMAC-NT /ptype KRB5_NT_PRINCIPAL /pass xxxxxxxxx

    I can successfully get a ticket from the Linux web server using this command:

        kinit -k -t /path/to/keytab HTTP/[email protected]

    ... and view the ticket with klist. I have also configured my web server with these Kerberos properties:

        <Directory />
            AuthType Kerberos
            AuthName "Example.com Kerberos domain"
            KrbMethodK5Passwd Off
            KrbAuthRealms EXAMPLE.COM
            KrbServiceName HTTP/[email protected]
            Krb5KeyTab /path/to/keytab
            Require valid-user
            SSLRequireSSL
            <Files wsgi.py>
                Order deny,allow
                Allow from all
            </Files>
        </Directory>

    However, when I attempt to log in to the website (from another Desktop with username 'Jeff') my Kerberos credentials are not automatically accepted by the web server. It should grant me access immediately after that, but it does not. The only information I get from the mod_auth_kerb logs is:

        kerb_authenticate_user entered with user (NULL) and auth_type Kerberos

    However, more information is revealed when I change the mod_auth_kerb setting KrbMethodK5Passwd to On:

        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1939): [client xxx.xxx.xxx.xxx] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1031): [client xxx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(735): [client xxx.xxx.xxx.xxx] Trying to get TGT for user [email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(645): [client xxx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1110): [client xxx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic

    What am I missing? I've studied a lot of online tutorials and cannot find a reason why the Kerberos credentials are not allowing access.
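
    A minimal sanity-check sketch for this kind of setup (assuming MIT Kerberos client tools on the Linux host and the standard setspn utility on the AD side; the principal names simply mirror the example above):

        # list the principals stored in the keytab - the SPN and key version must match AD
        klist -k /path/to/keytab

        # after kinit as a normal user, confirm a service ticket can be obtained for the web server's SPN
        kvno HTTP/[email protected]

        # on the AD side, check that the SPN is registered against the dummy account only once
        setspn -L dummy

    Common reasons a browser never sends a ticket (and silently falls back toward Basic/NTLM) include a duplicate SPN registration and the site not being in the browser's intranet/trusted zone for Negotiate authentication.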

    Read the article

  • Recommended motherboard with hardware raid for Linux

    - by luison
    Hi. We want to set up an internal office server for testing jobs (LAMP), email and Samba, for only about 5-10 users. We are also considering starting to virtualize, initially with a base Ubuntu Server running Xen or VMware Server. Our current system runs with Linux software RAID, which has worked great, but it has always been complicated to recover the boot sector when one of the drives fails, and therefore I would now prefer using a hardware RAID instead, ideally with some kind of software monitoring. For this reason, and considering we don't want to spend a fortune, I would appreciate any comments on the following options:

      - A motherboard with RAID with Linux support... which could you recommend?
      - A motherboard plus a hardware RAID card... Adaptec does not seem to have great Linux support. 3ware seems to have a soft controller which we've used at a hosting company, but it is hard to find here in Spain.
      - A basic HP ProLiant-type server - which one?
      - Dell small servers... any good for Linux?

    Thanks in advance for any feedback.
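
    Since the stated pain point is the boot sector on Linux software RAID, here is a hedged sketch of the usual mitigation (assuming a two-disk md RAID 1 on /dev/sda and /dev/sdb with legacy GRUB/BIOS boot; the mail address is a placeholder):

        # install the boot loader on both members of the RAID 1 set,
        # so the box still boots if either disk dies
        grub-install /dev/sda
        grub-install /dev/sdb

        # have the mdadm monitor mail you when an array degrades
        echo 'MAILADDR [email protected]' >> /etc/mdadm/mdadm.conf
        /etc/init.d/mdadm restart

    This does not replace a hardware controller, but it removes most of the drama from a single-drive failure.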

    Read the article

  • EWS connect to ExchangeServer authentication specifications

    - by dankyy1
    Hi all, I'm connecting to an Exchange Server with username, password and domain properties (my code below), but how do I specify whether the server uses Kerberos, NTLM or Basic authentication, for example? Thanks.

        ExchangeServiceBinding binding = new ExchangeServiceBinding();
        ServicePointManager.ServerCertificateValidationCallback = CertificateValidationCallBack;

        System.Net.WebProxy proxyObject = new System.Net.WebProxy();
        proxyObject.Credentials = System.Net.CredentialCache.DefaultCredentials;

        if (string.IsNullOrEmpty(credentials.UserName) || string.IsNullOrEmpty(credentials.Password) || string.IsNullOrEmpty(credentials.Domain))
            throw new ArgumentNullException("The credential values could not be null or empty.");

        binding.Credentials = new NetworkCredential(credentials.UserName, credentials.Password, credentials.Domain);

        if (string.IsNullOrEmpty(serverURL))
            throw new ArgumentNullException("The Exchange server URL could not be null or empty.");

        binding.Url = serverURL;
        binding.UseDefaultCredentials = true;
        binding.Proxy = proxyObject;

        // TODO: take version over parameter... or configuration!!
        binding.RequestServerVersionValue = new RequestServerVersion();
        binding.RequestServerVersionValue.Version = (ExchangeVersionType)Enum.Parse(typeof(ExchangeVersionType), serverVersion); // ExchangeVersionType.Exchange2007_SP1; //.Exchange2010;

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL, which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1

    This works pretty well, but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2

    If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no such luck finding a force-SSL config option with the Redirect directive, and so have been unable to test this theory.

    Read the article

  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so:

        DNS:example.com
        DNS:*.example.com
        DNS:*.beta.example.com
        DNS:example.net
        DNS:*.example.net
        DNS:*.beta.example.net

    Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Name (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards for the Subject Alternative Name?

    One little restriction: I'm saddled with Amazon EC2's single Elastic IP per instance limitation. Here are what I see as my backup plans:

      - Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from three of them into the app server(s). This introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
      - Instead of dedicated reverse proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst themselves. Complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, so if I don't need 4+ app servers for the load, it raises the cost.
      - Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP address restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Extra IP addresses are almost free (you still need to pay for the server, of course), but they don't come with the robust "elasticity" that Amazon's Elastic IPs provide, and there is even more latency than in the first scenario.

    Are these approaches crazy or reasonable? Do you have another one to suggest?
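
    For reference, a minimal sketch of generating a CSR that carries exactly that SAN list, assuming OpenSSL is available (the file names are placeholders); whether a given CA will sign it is the open question above:

        # san.cnf - request config carrying the SAN list from the question
        cat > san.cnf <<'EOF'
        [ req ]
        distinguished_name = req_dn
        req_extensions     = v3_req
        prompt             = no

        [ req_dn ]
        CN = example.com

        [ v3_req ]
        subjectAltName = DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net
        EOF

        # generate a key and a CSR that includes the subjectAltName extension
        openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr -config san.cnf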

    Read the article

  • Address (url) forwarding with Vyatta

    - by Trikks
    Got this kind of noob question, I suppose. I have a very basic network setup and need help setting up some address forwarding. As seen in my illustration below, all traffic enters via the eth0 interface (85.123.32.23). The external DNS is set up to direct all hosts to this IP as well. Now, how on earth do I filter the incoming requests to each box? The IPs are static!

    My network layout: I do not wish to solve this by assigning tons of ports etc. In my wishful thinking, something like this would be nice :)

        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 destination address ftp.myhost.com
        set service nat rule 10 inside-address address 192.168.100.20

    This way ALL traffic to the address ftp.myhost.com (at eth0) should be routed to the internal IP, 192.168.100.20. Right, is there anyone who could point me in some direction? Maybe it's wrong to use NAT? Please help me! :)
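
    Destination NAT matches on IP addresses and ports, not hostnames, so with a single public IP the usual approach is one rule per service port. A hedged sketch in the same Vyatta syntax as the snippet above (the rule numbers, ports and the second inside address are illustrative, and exact syntax varies between Vyatta releases):

        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 protocol tcp
        set service nat rule 10 destination port 21
        set service nat rule 10 inside-address address 192.168.100.20

        set service nat rule 20 type destination
        set service nat rule 20 inbound-interface eth0
        set service nat rule 20 protocol tcp
        set service nat rule 20 destination port 80
        set service nat rule 20 inside-address address 192.168.100.30
        commit

    For HTTP specifically, routing by hostname needs a name-based reverse proxy in front of the web boxes, since the hostname only exists at the application layer.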

    Read the article

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful. A nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line. I get a lot of moans about the performance of "the internet" (ie web-browsing). What recommendations do the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable. My setup is this: Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT, and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into. ISA 2006 Enterprise array (two servers) connecting DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours. I've done a few things - I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms. Well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well as compared to five years ago. Plus the 70GB of cache which felt like a lot a few years ago really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, ie larger objects are less likely to be cached. Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily-loaded bandwidth?

    Read the article

  • ESXi Server with 12 physical cores maxed out with only 8 cores assigned in virtual machines

    - by Sam
    I have an ESXi 5 server running on a 2-processor, 12-core system with hyperthreading enabled. So: 12 physical cores, 24 logical ones. On this server are 4 Windows 7 VMs, each configured for 2 processors, each running VMware Tools. Looking at my stats in vSphere, my "core utilization" is constantly maxed out. Yes, these machines are working hard, but only 8 cores have been allocated. How is this possible? Should I look into reducing the processor count per machine as in this post: VMware ESX server? I checked to ensure that hardware virtualization is enabled in the BIOS of the machine (a DELL R410). I've also started reading up on configuration, but being a newbie there's a lot of material to catch up on. It also seems I should only bother with advanced settings and pools if I'm really pushing the load, and I don't think that I should be pushing it with so few VMs. I suspect that I have some basic, incorrect configuration setting, but it's also possible that I have some giant misconceptions about virtualization. Any pointers? EDIT: Given the responses I've gotten so far, it seems that this is a measurement problem and not a configuration problem, making this less critical. Perhaps the real question is: How does the core utilization of the server reach a higher percentage than all individual cores' core utilization, and given that this possibility makes the metric useless for overall server load, what is the best global metric for measuring CPU load on hyper-threaded systems?
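
    One hedged way to separate the "core utilization" metric from what the guests are actually consuming is to watch the per-VM CPU counters in esxtop on the host itself (the flags below are standard esxtop batch-mode options; the sample counts are arbitrary):

        # interactive: press 'c' for the CPU panel and watch %USED and %RDY per VM
        esxtop

        # batch mode: capture 12 samples at 5-second intervals for offline analysis
        esxtop -b -d 5 -n 12 > cpu-sample.csv

    As I understand it, high %RDY with modest %USED points at scheduling contention rather than the guests genuinely needing more vCPUs, while "core utilization" counts a physical core as busy whenever either of its hyperthreads is active - which is why it can sit near 100% even though the sum of per-core utilization does not.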

    Read the article

  • Is there a simple LDAP-to-HTTP gateway out there?

    - by larsks
    We have a local LDAP directory that provides basic contact information about our user community. We would like to integrate this into some third-party hosted services that allow us to implement widgets that run arbitrary JavaScript. In order to connect JavaScript to our LDAP directory, I would like to set up a simple LDAP-to-HTTP proxy that would accept HTTP GET requests, translate them into an appropriate LDAP query, and respond with directory information as JSON-encoded data. In an ideal world, something like this:

        GET /[email protected]

    Would get me something like this:

        {
          "cn": "Bob Person",
          "title": "System Administrator",
          "sn": "Person",
          "mail": "[email protected]",
          "telephoneNumber": "617-555-1212",
          "givenName": "Bob"
        }

    (And this obviously assumes that the web application has locally configured information about what base DN to use, how to authenticate, etc.) I guess I could write one... but surely something like this already exists?

    UPDATE: The consensus seems to be that there isn't a pre-existing solution out there and that I should just get off my lazy derriere and write one. So I did, and it's here. It's not especially pretty, but it works for my prototyping and I figure maybe someone else will find it useful someday.
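
    For comparison, the raw query such a gateway would wrap is just an ldapsearch - a sketch assuming OpenLDAP client tools, with a placeholder server, base DN and bind account:

        ldapsearch -x -LLL \
            -H ldap://ldap.example.com \
            -D 'cn=readonly,dc=example,dc=com' -W \
            -b 'dc=example,dc=com' \
            '(mail=bob@example.com)' \
            cn title sn mail telephoneNumber givenName

    The gateway then only has to turn the resulting LDIF into the JSON shape shown above.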

    Read the article

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the cmd prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display the page". Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to do this. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

        Status:   Resolving address of ftp.technologyblends.com
        Status:   Connecting to 75.149.66.201:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER *
        Response: 331 Password required for .
        Command:  PASS ***
        Response: 230 User logged in.
        Command:  SYST
        Response: 215 Windows_NT
        Command:  FEAT
        Response: 211-Extended features supported:
        Response:  LANG EN*
        Response:  UTF8
        Response:  AUTH TLS;TLS-C;SSL;TLS-P;
        Response:  PBSZ
        Response:  PROT C;P;
        Response:  CCC
        Response:  HOST
        Response:  SIZE
        Response:  MDTM
        Response:  REST STREAM
        Response: 211 END
        Command:  OPTS UTF8 ON
        Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing
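
    The trace dies exactly at PASV, which points at the passive data channel rather than port 21. A hedged sketch of the usual IIS 7.5 fix (the 5000-5100 range is an arbitrary example and must also be forwarded on the Cisco firewall; the appcmd syntax is from memory - the same settings live in IIS Manager under "FTP Firewall Support", where the external IP of the firewall should also be set so PASV advertises 75.149.66.201 instead of the private address):

        rem pin the passive data channel to a known range (5000-5100 is just an example)
        appcmd set config -section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5100 /commit:apphost

        rem allow that range through Windows Firewall (the Cisco must forward it too)
        netsh advfirewall firewall add rule name="FTP passive range" dir=in action=allow protocol=TCP localport=5000-5100

        rem restart the FTP service so the new range takes effect
        net stop ftpsvc && net start ftpsvc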

    Read the article

  • Why does my Belkin wireless router have eMule ports open?

    - by Jeremy Powell
    I have a Belkin F6D4230-4 v1 router. When I port scan it with nmap I get the following:

        $ sudo nmap -sS -A -T5 192.168.2.1 -p-

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-17 11:40 CDT
        Interesting ports on 192.168.2.1:
        Not shown: 65532 closed ports
        PORT     STATE    SERVICE  VERSION
        80/tcp   open     http     Belkin 2307 wifi router http config (IP_SHARER httpd 1.0)
        |_ html-title: '+i1+'
        4661/tcp filtered unknown
        4662/tcp filtered edonkey
        MAC Address: 00:22:75:5D:52:D8 (Belkin International)
        Device type: WAP|broadband router|firewall|printer|specialized|webcam
        Running (JUST GUESSING) : Linksys embedded (95%), TRENDnet embedded (95%), Netgear embedded (92%), Canon embedded (89%), On Time RTOS (89%), Symantec embedded (89%), D-Link embedded (86%), Polycom embedded (85%)
        Aggressive OS guesses: Linksys WRT54GC or TRENDnet TEW-431BRP wireless broadband router (95%), TRENDnet TW100-BRF114 broadband router (95%), Netgear FR114P ProSafe VPN firewall (92%), Canon PIXMA MX850 printer (89%), On Time RTOS (89%), Symantec Firewall/VPN 100 (89%), D-Link DI-714P+ wireless broadband router (86%), Polycom ViewStation video conferencing system (85%)
        No exact OS matches for host (test conditions non-ideal).
        Network Distance: 1 hop
        Service Info: Device: WAP

        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 21.57 seconds

    Why are the eMule ports 4661 and 4662 showing up as filtered? This is a basic, out-of-the-box installation.

    Read the article

  • Migrating from "partial" Exchange 2003 to full Exchange 2003 usability

    - by TheCleaner
    I have a client that is using Exchange 2003 on SBS 2003 R2, but only for calendar sharing and contacts sharing. Their email is still coming to their clients via a POP3 account in each client's Outlook. I'd like to move them over to using Exchange for both email and the other things they are utilizing it for now. Can you folks guide me in the right direction?

    The setup:

      - the external domain is akin to domain.com (and is where they get their POP3 email from now)
      - the internal domain is akin to domain.local
      - only a simple hardware firewall (no ISA)
      - a static external IP is available to use

    My "assumptions":

      - Set up an SMTP default connector in Exchange for their existing external domain
      - Have their existing email backed up to PST files (just in case)
      - Set up the new MX records to point domain.com to the static external IP

    I'm a little confused about how I'm going to set up their existing Exchange accounts with the proper SMTP address, though. Right now it is just [email protected]. Do I just need to modify or create a new recipient policy? Are there other steps involved that I'm missing? Anyone with a walkthrough, or even basic steps, is fine. I'm fairly used to Exchange 2003, but I've been on Exchange 2007 for a while now, so going back is the weird part... plus I don't know what issues Exchange 2003 on SBS has versus the normal version. Thanks for all the help!

    Read the article

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which fires up connectivity, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes that the VM is running happily, as I presume Hyper-V says everything is great even though there is a problem.

    Read the article

  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, let's run this idea by the group here. I am thinking about using VMware Server in production to host:

      - a 2008 domain controller with DHCP and DNS,
      - a 2008 member server with WSUS, some antivirus software, and other "management" utilities, and
      - a second 2008 member server with SQL, IIS, and file shares

    for a medium business of 50-100 desktops. The reason I am leaning toward Server vs ESXi is for backup purposes. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the VMDKs. I am wondering if putting this virtual environment on top of a basic 2008 Server install will allow for easier backups, both to tape and/or to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12 GB RAM and (6) 600 GB 15K drives in RAID 6, with dual Intel Xeon 2.26 GHz procs.
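
    On the scheduled-snapshot question: VMware Server 2.0 ships a vmrun command-line tool that can be driven from Task Scheduler. A hedged sketch (the install path, credentials, datastore and VM names are placeholders; check vmrun's syntax against the installed version):

        rem nightly-snapshot.cmd - run from Task Scheduler on the 2008 host
        "C:\Program Files\VMware\VMware Server\vmrun.exe" -T server -h https://localhost:8333/sdk ^
            -u adminuser -p secret snapshot "[standard] DC01/DC01.vmx" nightly

    Snapshots are not backups by themselves, though - the usual pattern is snapshot, copy the VMDKs off-host (to tape or JungleDisk), then delete the snapshot so the delta files don't grow unbounded.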

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed.

    Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume that this is an issue with the Apache user (www-data) not having adequate permissions, specifically to access /dev/null. I also read that the reason post-commit fails silently is that it doesn't report with stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
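
    The "Can't create null stdout" wording points at /dev/null itself rather than the hook. A hedged set of checks using standard tools (the log path in the last line is an arbitrary example):

        # /dev/null should be a character device, mode 666, owned by root
        ls -l /dev/null

        # can the Apache user actually open it for writing?
        sudo -u www-data sh -c 'echo test > /dev/null && echo "/dev/null is writable"'

        # if it has been clobbered into a regular file, recreate it
        sudo rm /dev/null && sudo mknod -m 666 /dev/null c 1 3

        # while debugging, make the hook log its own errors somewhere visible
        # (add near the top of post-commit)
        exec 2>>/tmp/svn-hook.log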

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250 MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break up the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running 500-ish jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000-ish jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
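
    For comparison, since host only needs one argument per line, the wrapper script (and its while read loop) can be dropped entirely and GNU Parallel can hand each line to host itself - a sketch under the same assumptions as above (urllist has one domain per line; -j100 is an arbitrary job count):

        # let parallel split urllist line by line and pass each domain to host directly
        cat urllist | parallel -j100 host {}

        # or, without the useless cat
        parallel -j100 -a urllist host {}

    This sidesteps the while read loop and the SIGPIPE question altogether, because each job receives its argument on the command line rather than reading from a shared stdin.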

    Read the article

  • Is it possible to be a Linux professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, right? :P). I have some basic knowledge of the booting process, compiling the Linux kernel from source, and stuff like that. But of course I still have much to learn - sometimes errors appear and, voila, I am lost. I have had Ubuntu, Fedora, openSUSE and Arch, and I'm using Gentoo now. I'd like to know what you Linux users, professionals, administrators... think is the best way to learn Linux in a professional way. Is studying it on my own and passing the LPIC exams enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS (Linux From Scratch) is a good way of learning about Linux - is that true? I've been thinking about going through LFS to learn more deeply about the Linux boot process and about scripting. Is it possible to learn this way? If anyone has a tip or a good way of doing it, maybe someone who has done it themselves, any tip is very welcome. Words from a person in love with Linux. :D The best, Marc

    Read the article

  • Windows 7 AIK help

    - by microchasm
    I've just got in a few Windows 7 machines (64-bit, Windows 7 Professional), and I'm trying to get the AIK 2010 working. I've set up one of the machines and installed AIK and MDT on it. I've followed the directions at http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx about 3 times now, and also tried the built-in .chm help files that came with AIK. In AIK I grab the install.wim image off the OEM CD, at which point it asks me which version (I select Professional). I follow the rest of the instructions, creating a new Autounattend answer file, and fill in the various bits and pieces according to the step-by-step guides. I verify the answer file (no warnings or errors), save it, and copy it onto a USB drive. I go to another machine, insert its OEM Win7 disk, and power on. I've set the BIOS to boot from CD, so it goes directly into the installation. Once the files are loaded and Setup starts, it immediately asks which version to install (Home Basic, Home Premium, Professional, Ultimate). Ugh - I thought it was supposed to be an automated install, and that selecting the version when opening the .wim file would answer this question. I looked for an option to set which version to be installed on the net, in the help, and in AIK itself, to no avail. Anyway, just for laughs I select Professional and hit continue. It copies files for about 10 seconds, then fails with the following error:

        "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information. [OK]"

    Clicking OK reboots the box, and obviously there are no log files because the OS isn't installed. It is a Dell OptiPlex 380, Intel Core Duo 2.93 GHz, 4 GB RAM, 64-bit. Any help would be REALLY appreciated.
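
    Two hedged observations: the fact that Setup still asks for an edition usually means Autounattend.xml was never read from the USB stick (or lacks an image selection in its windowsPE pass), and the "unable to create a new system partition" error is often cured by wiping the target disk's partition table first. A diskpart sketch for the latter (this erases disk 0 - make sure it is the right disk):

        rem run from Shift+F10 inside Windows Setup, or from WinPE
        diskpart
        list disk
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit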

    Read the article

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to set up Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple of problems I could use help with. The PHP side of things seems to be working - I can see the home page and the sidebar. But all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error that /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios:

        <VirtualHost *:300>
            # apache configuration for nagios 3.x
            ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3
            ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3

            # Where the stylesheets (config files) reside
            Alias /nagios3/stylesheets /etc/nagios3/stylesheets

            # Where the HTML pages live
            Alias /nagios3 /usr/share/nagios3/htdocs

            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1

            <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)>
                Options FollowSymLinks ExecCGI
                AllowOverride AuthConfig
                Order Allow,Deny
                Allow From All
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /etc/nagios3/htpasswd.users
                require valid-user
            </DirectoryMatch>

            <Directory /usr/share/nagios3/htdocs>
                Options +ExecCGI
            </Directory>
        </VirtualHost>

    I also added this in my global Apache config:

        AddHandler cgi-script .cgi

    Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
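
    One hedged guess: on Apache 2.4 with a threaded MPM, .cgi files download as plain files when no CGI module is loaded at all, and the old Order/Allow directives need mod_access_compat. A sketch of the usual Debian/Ubuntu-style checks (module and service names assume the stock packaging):

        # enable a CGI implementation (cgid for threaded MPMs, cgi for prefork)
        sudo a2enmod cgid
        # keep the old Order/Allow/Deny syntax working, or convert it to "Require all granted"
        sudo a2enmod access_compat
        sudo service apache2 restart

        # confirm a cgi module is now loaded
        apachectl -M | grep -i cgi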

    Read the article

  • VMware Player 3.0 - cannot ping 32-bit guest from 64-bit (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using Windows 7 Ultimate 64-bit as the host and have CentOS 5.4 (64-bit) as one guest and Windows XP Professional SP3 (32-bit) as another guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I have already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using VMnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have internet access from the host and from both guests. Port forwarding to the XP guest is working OK too. The only problem is that I cannot access the XP guest through VMnet8. I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and after that there is no echo request leaving the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest. From the XP guest I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized: storage: 2 TB; users: ~15; file sizes: < 100 GB; should be easily reachable (mount as a network drive, or at least have solid "communication" software).

    My first question would be: where can I get halfway affordable online storage? And how should I connect it to my server? Getting an additional server is a bit overkill, as I know of no hoster which allows 2 TB on a small 2 GHz dual-core, 2 GB RAM box (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the second one is the more important one anyway - you don't have to do much more than recommend some providers based on experience, you don't have to crawl through hosting services for hours.) livedrive, for example, offers 5 TB for 17 € / month; I'd be happy with 2 TB for 20 €. The caveat is that it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own space plus a public space" thing), secure and fast? I'd tend towards (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single users lead me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?
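
    On the "mount as a network drive" requirement: if the storage end speaks SSH, SFTP can be mounted directly with sshfs - a small sketch (host name, path and mount point are placeholders) that keeps the transport encrypted without any extra PKI machinery:

        # mount the remote storage over SFTP; unmount later with: fusermount -u /mnt/storage
        sudo apt-get install sshfs
        sshfs storageuser@storage.example.net:/srv/share /mnt/storage -o reconnect

    Per-user private vs public areas then reduce to ordinary Unix permissions on the mounted tree.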

    Read the article
