Search Results

Search found 88020 results on 3521 pages for 'server fault'.


  • Help with NAT and a virtual server

    - by Thanh Tran
    I have a dedicated server running Linux CentOS 5.3 with 2 IP addresses. I've installed a virtual machine using VMware Server; the host and the guest share a host-only network. Now I want to map the 2nd IP address to the virtual machine so that it can run as a second dedicated server for me. Here is what I do:

        modprobe iptable_nat
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t filter -A FORWARD -s 192.168.78.128 -d 64.85.164.184 -j ACCEPT
        iptables -t nat -A PREROUTING -d 64.85.164.184 -i eth0 -j DNAT --to-destination 192.168.78.128
        iptables -t nat -A POSTROUTING -s 192.168.78.128 -o eth0 -j SNAT --to-source 64.85.164.184

    But it is not working as intended. What is the matter?
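
    A sketch of pieces that are often missing in a setup like this – only eth0, the public IP and the guest address come from the question; the interface names, policies and gateway arrangement below are assumptions, not a confirmed fix:

        # Assumption: the second public IP must be answered by the host itself
        # (e.g. as an address on eth0) before PREROUTING/DNAT ever sees the traffic.
        ip addr add 64.85.164.184/32 dev eth0

        # Assumption: forwarded traffic must be accepted in both directions between
        # the internet-facing interface and the host-only (vmnet) interface.
        iptables -A FORWARD -d 192.168.78.128 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -s 192.168.78.128 -j ACCEPT

    The guest would also need the host's host-only address configured as its default gateway so that replies come back through the host where the SNAT rule can rewrite them.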

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about, and I figured it would be worth highlighting here because I’ll bet not many others know about it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that already contains data – prior to SSDT you would need to define a DEFAULT constraint, yet it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment – and that’s the situation Smart Defaults is meant to alleviate. The Smart Defaults option lives in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option causes SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would otherwise highlight; in other words, Smart Defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
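
    For illustration, a Post-Deployment update for the example above might look like the sketch below; the table, column and the arbitrary default 0 come from the example, but the replacement value and the WHERE condition are assumptions about how you would identify the rows that received the Smart Default:

        -- Hypothetical Post-Deployment script: replace the arbitrary Smart Default (0)
        -- with a real value for rows populated by the deployment.
        UPDATE [dbo].[T1]
        SET    [C1] = 1      -- assumed "real" value for existing rows
        WHERE  [C1] = 0;     -- assumes 0 only appears where the Smart Default was applied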

    Read the article

  • Node.js server crashes, database operations halfway?

    - by Ranadeep
    I have a Node.js app with a MongoDB backend going to production in a week, and I have a few doubts about how to handle app crashes and restarts. Say I have a simple route /followUser in which I have two database operations:

        /followUser
            -----> Update User1 Document.followers = User2
            -----> Update User2 Document.followers = User1
            -----> Some other MongoDB (via Mongoose) operation

    What happens if there is a server crash (due to power failure, or maybe the remote MongoDB server is down) in a scenario like this:

        -----> Update User1 Document.followers = User2
        SERVER CRASHED, FOREVER RESTARTS NODE

    What happens to the operations below? The system is now in an inconsistent state and I may get an error every time I ask for User2's followers:

        -----> Update User2 Document.followers = User1
        -----> Some other MongoDB (via Mongoose) operation

    Also, please recommend good logging and restart/monitor modules for apps running on Linux.

    Read the article

  • SQL Solstice

    - by andyleonard
    Introduction: My friends in North Carolina have decided to create a new event called SQL Solstice. Details:

        18 - 20 Aug 2011
        Holiday Inn Brownstone & Conference Center, 1707 Hillsborough Street, Raleigh, NC 27605 (Toll Free 800-331-7919)

        18 Aug - A Day of Deep Dives ($259): day-long presentations delivered by folks with real-world, hands-on experience.
            Louis Davidson on Database Design
            Andrew Kelly on Performance Tuning
            Jessica M. Moss on Reporting Services
            Ed Wilson on PowerShell
            (me) on SSIS
        19...(read more)

    Read the article

  • What could cause my LAN pings to be greater than 100ms?

    - by James Holland
    I have 2 servers (both Windows Server 2008, dual Xeon 2.8 GHz, 32GB RAM, 8 x 15k SAS drives). One of them is a DC / web server / Exchange server, the other is a SQL Server (2008). I have a 48-port Netgear GS748T gigabit switch. When I ping from server to server, I get ping times <1ms, great, but when I ping from a PC, I get varying pings from the occasional <1ms up to 500ms!! If I log into either server and look at Task Manager, CPU usage peaks at 20% and memory usage is 100%, but I am led to believe this is normal as Exchange will just use as much as you have and release it when requested. Network usage peaks at 1%. I really don't understand how the ping can vary that much. I know I am giving very little info, but this is all I know; I apologise, but can anyone help? In response to a question: I have pinged by both IP address and hostname, with no difference in ping times.

    Read the article

  • How To Speed Up Adding a Column To a Large Table In SQL Server

    - by Chris
    I want to add a column to a SQL Server table with about 10M rows. I think this query would eventually finish adding the column I want:

        ALTER TABLE T ADD mycol bit NOT NULL DEFAULT 0

    but it's been going for several hours already. Is there any shortcut to get a "not null default 0" column inserted into a large table? Or is this inherently really slow? This is SQL Server 2000. Later on I have to do something similar on SQL Server 2008.
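
    One commonly suggested workaround is to add the column as NULLable first (a metadata-only change), backfill it in batches, and only then make it NOT NULL and attach the default. A sketch only – the table and column names come from the question, the batch size and constraint name are assumptions, and the final ALTER COLUMN still has to scan the table to validate that no NULLs remain:

        -- Step 1: add the column as NULLable (instant, metadata only)
        ALTER TABLE T ADD mycol bit NULL;
        GO

        -- Step 2: backfill in small batches to keep transactions and log growth modest.
        -- SET ROWCOUNT works for DML on SQL Server 2000; on 2008 UPDATE TOP (10000) is the
        -- usual equivalent. The batch size of 10000 is an arbitrary assumption.
        SET ROWCOUNT 10000;
        WHILE 1 = 1
        BEGIN
            UPDATE T SET mycol = 0 WHERE mycol IS NULL;
            IF @@ROWCOUNT = 0 BREAK;
        END
        SET ROWCOUNT 0;
        GO

        -- Step 3: enforce NOT NULL and add the default for future inserts
        ALTER TABLE T ALTER COLUMN mycol bit NOT NULL;
        ALTER TABLE T ADD CONSTRAINT DF_T_mycol DEFAULT 0 FOR mycol;
        GO

    The original single statement is slow largely because it updates every row inside one transaction; the batched approach trades total duration for shorter locks and a smaller log.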

    Read the article

  • Which version of Windows Server 2008?

    - by dragonmantank
    One of the projects I'm working on looks like it will need to migrate from CentOS 5.4 to something else (we need to run PostgreSQL 8.3+, and CentOS/RHEL only support 8.1), and one of the options will be Windows Server. Since 2008 R2 is out, that's what I'm looking at. I'll need to run Postgres and Tomcat and don't really require anything Windows-specific like IIS (if I can run Server Core, even better!). The other kicker is that it will be virtualized through VMware ESXi 4.0, so that we have three separate boxes: development, quality, and production servers. From a licensing standpoint, though, am I good enough with just the Web Server edition? Am I right in assuming that will be three licenses? Or should I just jump up to Enterprise so that I get 4 VM licenses?

    Read the article

  • Troubleshooting Windows Server 2012 storage spaces

    - by Iravanchi
    I'm trying the new "Storage Pools" feature on Windows Server 2012, and I've created several disks on the pool. When I restart the server, some of the disks (two, out of four) do not attach automatically, and don't show up in the list of disks. I can go to Server Manager File and Storage Services Storage Pools, and the faulty disks are listed with a yellow triangle beside them. The drive health in the properties are "unknown". But if I right-click and choose attach, the disk comes online, with all the content on it intact. But after another restart, it's the same story. I didn't find any relevant event in the event log, how can I find out why the drives are not attaching?

    Read the article

  • Why would a server not send a SYN/ACK packet in response to a SYN packet

    - by codemonkey
    Lately, we've become aware of a TCP connection issue that is mostly limited to Mac and Linux users who browse our websites. From the user's perspective, it presents itself as a really long connection time to our websites (11 seconds). We've managed to track down the technical signature of this problem, but can't figure out why it is happening or how to fix it. Basically, what is happening is that the client's machine sends the SYN packet to establish the TCP connection and the web server receives it, but does not respond with the SYN/ACK packet. After the client has sent many SYN packets, the server finally responds with a SYN/ACK packet and everything is fine for the remainder of the connection. And, of course, the kicker to the problem: it is intermittent and does not happen all the time (though it does happen 10-30% of the time). We are using Fedora 12 Linux as the OS and Nginx as the web server.

    Read the article

  • Windows Server 2012 Metro shortcut icons do not show for other users

    - by Andrew
    I have installed SQL Server 2012 and SharePoint 2013 on my Windows Server 2012 machine using a dedicated domain install account. When I log into the same machine with a user account, all the icons for these applications are missing! I can still access the applications by finding them in 'Program Files'; however, it is very annoying. (For example, I'm not exactly sure where the SharePoint PowerShell is located, and frankly I don't want to know either.) In previous versions of Windows Server, the icons always showed up in the Start Menu. Does anyone know how I can copy the shortcuts from one account to another?

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. This usually comes up when data is being modified; during retrieval of data it is not an issue. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.

    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the update made by the earlier transaction.

    2) Dirty Reads – This problem occurs when a transaction selects data that isn’t committed by another transaction, leading to reading data which may not exist when the transactions are over. Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the changes. Transaction 2 has selected a row which does not exist.

    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is later updated by Transaction 2. When Transaction 1 later selects the row again, it gets a different value.

    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which match the “to be deleted” criteria; during the same time Transaction 2 inserts a row matching that criteria. When Transaction 1 is done deleting rows, there will still be rows left marked to be deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with isolation levels or locking hints (e.g. NOLOCK), which can very well compromise your data integrity, leading to much larger issues in the future. Here is a quick mapping of the isolation levels to the concurrency problems:

        Isolation         | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads
        Read Uncommitted  | Yes         | Yes         | Yes                 | Yes
        Read Committed    | No          | Yes         | Yes                 | Yes
        Repeatable Read   | No          | No          | No                  | Yes
        Snapshot          | No          | No          | No                  | No
        Serializable      | No          | No          | No                  | No

    I hope this 400-word article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
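
    As a small illustration of the "Dirty Reads" column of that table – a sketch only; the table name, column and values below are made up for the example – a dirty read can be reproduced with two sessions:

        -- Session 1: change a row but leave the transaction open (hypothetical table)
        BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Session 2: under READ UNCOMMITTED the uncommitted change is visible (dirty read)
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Account WHERE AccountId = 1;

        -- Session 1: roll back; Session 2 has read a value that never logically existed
        ROLLBACK TRANSACTION;

    Running the same two sessions under READ COMMITTED instead makes Session 2's SELECT wait for Session 1 to commit or roll back, which matches the "No" in that row of the table.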

    Read the article

  • Server Performance

    - by Burt
    We have a dedicated server that we use to stage websites (our test server). The performance of the server has become really bad and we regularly have to restart it. When performance is poor I have checked Task Manager for the processes and memory, but everything looks OK. We use a content management system, and it is always when using the admin section of this CMS that we notice the performance degrade, which makes me think it may have something to do with the DB calls the CMS is making. Does this sound viable? Any other suggestions on how I can go about testing this? Thanks in advance...
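
    If the CMS backend is SQL Server (an assumption – the question does not say which database engine is in use), one quick way to test the "expensive DB calls" theory is to look at the heaviest cached statements by average CPU; a sketch:

        -- Top statements by average CPU time since they were cached
        SELECT TOP (10)
               qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
               qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END
                     - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_cpu_microseconds DESC;

    If the admin-section queries show up near the top, the CMS's data access is a reasonable suspect; if not, look elsewhere (disk, memory pressure, the CMS application itself).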

    Read the article

  • SQL SERVER – Introduction to Function SIGN

    - by pinaldave
    Yesterday I received an email from a friend asking how the SIGN function works. Well, SIGN is a very fundamental function. It returns the value 1, -1 or 0: if your value is negative it returns -1, if it is positive it returns +1, and if it is zero it returns 0. Let us start with a simple small example.

        DECLARE @IntVal1 INT, @IntVal2 INT, @IntVal3 INT
        DECLARE @NumVal1 DECIMAL(4,2), @NumVal2 DECIMAL(4,2), @NumVal3 DECIMAL(4,2)
        SET @IntVal1 = 9;
        SET @IntVal2 = -9;
        SET @IntVal3 = 0;
        SET @NumVal1 = 9.0;
        SET @NumVal2 = -9.0;
        SET @NumVal3 = 0.0;
        SELECT SIGN(@IntVal1) IntVal1, SIGN(@IntVal2) IntVal2, SIGN(@IntVal3) IntVal3
        SELECT SIGN(@NumVal1) NumVal1, SIGN(@NumVal2) NumVal2, SIGN(@NumVal3) NumVal3

    The queries above give the following result set. You will notice that when the value is positive the function returns a positive value, and when the value is negative it returns a negative value. You will also notice that if the data type is INT the return value is INT, and when the value passed to the function is numeric the result matches that type as well. Not every data type is compatible with this function. Here is a quick lookup of the return types:

        bigint -> bigint
        int/smallint/tinyint -> int
        money/smallmoney -> money
        numeric/decimal -> numeric/decimal
        everything else -> float

    What is the best thing about using this function? That you do not have to use a CASE statement. Here is an example of a CASE statement and the same logic replaced with the SIGN function.

        USE tempdb
        GO
        CREATE TABLE TestTable (Date1 SMALLDATETIME, Date2 SMALLDATETIME)
        INSERT INTO TestTable (Date1, Date2)
        SELECT '2012-06-22 16:15', '2012-06-20 16:15'
        UNION ALL
        SELECT '2012-06-24 16:15', '2012-06-22 16:15'
        UNION ALL
        SELECT '2012-06-22 16:15', '2012-06-22 16:15'
        GO
        -- Using CASE statement
        SELECT CASE
                   WHEN DATEDIFF(d, Date1, Date2) > 0 THEN 1
                   WHEN DATEDIFF(d, Date1, Date2) < 0 THEN -1
                   ELSE 0
               END AS Col
        FROM TestTable
        GO
        -- Using SIGN function
        SELECT SIGN(DATEDIFF(d, Date1, Date2)) AS Col
        FROM TestTable
        GO
        DROP TABLE TestTable
        GO

    This was an interesting blog post for me to write. Let me know your opinion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Windows Server 2008 hangs up while booting

    - by Jim R
    Windows Server 2008 hangs while booting after Windows Update applied several updates. The server is a virtual instance on a Server 2008 Hyper-V host. The other virtual servers are fine, but have not been updated. A normal boot shows the horizontal barber pole forever. When I do a safe boot it also hangs, with a "Please wait..." after loading many .sys files. The last successfully loaded file listed is:

        \Windows\system32\drivers\crcdisk.sys

    That is the extent of what I have been able to determine.

    Read the article

  • Why won't my Windows 7 KMS key work on my Server 2008 KMS server?

    - by Ryan Bolger
    Our Microsoft volume licensing site was recently updated to include our Windows 7 and Server 2008 R2 KMS keys. We have an existing KMS server running on Server 2008 (not R2). In an attempt to be proactive about supporting the new OSes in our environment, I unregistered the old KMS key with slmgr.vbs and tried registering the new key. The registration failed with error 0xC004F050; the description for that error was "The Software Licensing Service reported that the product key is invalid." What's wrong? I've checked and double-checked for typos against what is listed on the Volume Licensing website.

    Read the article

  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 box: Intel Xeon X3430 @ 2.40 GHz, 4 GB memory, 64-bit. There seem to be no problems other than the physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding, hard faults should be fairly rare on a machine with this amount of memory. Are there any logs I can show you, or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs etc. However, we connect to the server and do quite a bit of SQL via an application. For a query to retrieve, say, 20 rows, it can take 20+ seconds. Thanks in advance, Jamie. EDIT: What the server is used for: IIS, ASP web service, SQL 2008, Exchange. (Unable to upload screenshots due to low reputation – why doesn't my SO reputation work here? :)

    Read the article

  • SQL SERVER – Script to Update a Specific Column in Entire Database

    - by Pinal Dave
    Last week I received a very interesting question by email, and I really liked it because I had to play around with a SQL script for a while to come up with the answer he was looking for. Please read the question; I believe all of us face this kind of situation. "Pinal, in our database we have recently introduced a ModifiedDate column in all of the tables. From now on, whenever an update happens to a row, we set that field to the current date and time. Now here is the issue: when we added the field we did not give it a default value because we were not sure when we would go live with the system, so we left it NULL. The modification to the application went live yesterday and we are now updating this field. Here is where I need your help. We need to find every table in our database that has a ModifiedDate column and update that column with the current date and time. As our system has been live since yesterday, several thousand rows have already been updated with real-world values, and we do not want to overwrite those. Essentially: wherever there is a ModifiedDate column in our entire database, if it is NULL we want to update it with the current date and time. Do you have a script for it?" Honestly, I did not have such a script. This is a very specific requirement, but I was able to come up with two different methods to do it.

    Method 1: Using INFORMATION_SCHEMA

        SELECT 'UPDATE ' + T.TABLE_SCHEMA + '.' + T.TABLE_NAME +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM INFORMATION_SCHEMA.TABLES T
        INNER JOIN INFORMATION_SCHEMA.COLUMNS C
            ON T.TABLE_NAME = C.TABLE_NAME
            AND C.COLUMN_NAME = 'ModifiedDate'
        WHERE T.TABLE_TYPE = 'BASE TABLE'
        ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME;

    Method 2: Using DMVs

        SELECT 'UPDATE ' + SCHEMA_NAME(t.schema_id) + '.' + t.name +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM sys.tables AS t
        INNER JOIN sys.columns c
            ON t.object_id = c.object_id
        WHERE c.name = 'ModifiedDate'
        ORDER BY SCHEMA_NAME(t.schema_id), t.name;

    The scripts above generate the UPDATE statements that do the task that was asked. We can adapt the same pattern to generate pretty much any other statement and retrieve any other data as well. Click to Download Scripts. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
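
    If you prefer to run the generated statements directly rather than copy-pasting them, a sketch like the following – an assumption about how you might wire it up (it requires SQL Server 2008 or later for the inline DECLARE initialization), not part of the original answer – concatenates Method 2's output into one batch and executes it:

        -- Sketch: build and execute all generated UPDATE statements in one go
        DECLARE @sql NVARCHAR(MAX) = N'';

        SELECT @sql = @sql + 'UPDATE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name) +
                      ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;' + CHAR(13)
        FROM sys.tables AS t
        INNER JOIN sys.columns c ON t.object_id = c.object_id
        WHERE c.name = 'ModifiedDate';

        EXEC sys.sp_executesql @sql;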

    Read the article

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad core x86, 4GB RAM) that constantly has almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each one of which has to be recompiled, but this is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU,I/O,RAM) all show very modest utilization.
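
    One way to dig further – a sketch only; the grouping and labels are arbitrary choices, not a prescribed diagnostic – is to see how much of the plan cache is taken up by single-use plans versus reused plans, broken down by object type:

        -- Breakdown of cached plans by object type and reuse
        SELECT cp.objtype,
               CASE WHEN cp.usecounts = 1 THEN 'single use' ELSE 'reused' END AS reuse,
               COUNT(*)                     AS plan_count,
               SUM(cp.size_in_bytes) / 1024 AS total_kb
        FROM sys.dm_exec_cached_plans AS cp
        GROUP BY cp.objtype,
                 CASE WHEN cp.usecounts = 1 THEN 'single use' ELSE 'reused' END
        ORDER BY total_kb DESC;

    A large "Adhoc / single use" bucket alongside reused plans would be consistent with a compilations/sec counter that tracks batches/sec even though sys.dm_exec_query_stats shows plenty of plan reuse.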

    Read the article

  • New web site on Windows Server 2008 with IIS7 does not work

    - by user22817
    Hi guys, I have a new domain, www.biografica.ro, which was bought 3 months ago but never used until now. I've bought a server with Windows Server 2008 and installed the Web Server (IIS) role. I've added a new site in the C:\inetpub\wwwroot directory and did the settings (assigned the default IP to the www.biografica.ro host, etc. – I did this on IIS6 a year ago, so I think I know how to set it up correctly). The problem is that the default site created by the IIS installation is working, but mine is not. It is started, but the browser says "This link appears to be broken" in Chrome and "The webpage cannot be found" in IE. Do you know what I've done wrong? As far as I know a domain takes time to propagate, but I think locally it should work. Please help, I've spent 3 hours and cannot find the cause.

    Read the article

  • How can I permanently remove default root hints from a Server 2008 DNS server?

    - by TonyD
    My network exists in private address space and I am unable to perform DNS lookups against DNS servers on the internet directly (blocked by the firewall). There are other networks that exist in the same private address space as my network, and I need to be able to perform DNS lookups for devices in those networks as well. There are 2 main internal DNS servers in this private address space, but not on my network. I can perform DNS lookups against both of these servers for devices internal to our address space and for names on the internet. I would like to permanently remove the root hints from our Server 2008 R2 DNS server and replace them with these 2 internal DNS servers. I have removed them from the dnsmgmt console, from the C:\Windows\System32\DNS\cache.dns file, and from the RootDNSServers folder under the System folder in ADUC. Even so, they repopulate into the Root Hints tab in the DNS server properties after roughly an hour. Does anyone know how to permanently remove these entries?

    Read the article

  • Migration of egroupware from one Ubuntu server to another

    - by Chris Schmidt
    I am new to server administration and have been attempting to migrate from one Ubuntu server to another for 4 days now. I am having a problem with the migration of egroupware settings. Specifically, I need to find where the knowledgebase saves its files on the local machine. I have the database set up correctly and it is using the previous settings. However, articles in the knowledgebase are not showing the images associated with them. I have tried everything to find the files on the old server that store the data uploaded to the knowledgebase, but I cannot. If there is another way to import these files, or if someone knows where they are saved, I would really appreciate the help. -Chris

    Read the article

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it, and then demonstrated step-by-step creation of a basic tabular model project. In this part I’m going to throw some light on how to create measures and analyze them in Excel. If you look at the tabular project closely, you will notice that we have not defined any measures yet. So, in the first step we will define a measure. Open the solution and select the column you want to define as a measure, then click on the summation icon on the toolbar. You will see the aggregated result at the bottom of that column. You also have other choices, such as average, min, max, count and distinct count. After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar; this will open your analysis in Excel. Notice the PivotTable field list: the measures that we created in the earlier step appear there, and we can now use them in our analysis. Next, put the required fields in their respective places as column labels, row labels, values and report filter for the analysis – for example, region-wise sales on a yearly basis. You can also apply filters to the analysis by placing the slicer field in the report filter; in our example, we will take English product name as the filter. Optionally, you can use the slicer to filter data more interactively. To further improve our analysis, we can insert pivot charts. That’s all for this time; in my next post I’m going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

    Read the article

  • SQL Server 2008, Active Directory Groups, and Failed Logins

    - by Ryan Michela
    I keep getting a Login Failed error in my ASP.NET application when connecting to my SQL Server 2008 database. I am trying to log in as the user domain\foo. When I grant a database login (at the server and database level) for domain\foo, my application can connect. When I put domain\foo in a group called domain\goo and give domain\goo a database login, the user domain\foo cannot authenticate. This does not make any sense. Am I doing something wrong? domain\foo and domain\goo are configured identically; the only difference is that one is a user and one is a group containing that user. Adding Active Directory groups as users to SQL Server 2008 is supposed to work.
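
    For reference, a minimal sketch of granting access via the group and checking how SQL Server resolves the user's group membership; the domain, group and user names come from the question's placeholders, and the database name below is an assumption:

        -- Create a server-level login for the Windows group and map it into the database
        CREATE LOGIN [domain\goo] FROM WINDOWS;
        USE MyAppDb;   -- assumed database name
        CREATE USER [domain\goo] FOR LOGIN [domain\goo];

        -- Ask SQL Server which of the user's AD group memberships it can actually see;
        -- if domain\goo is missing here, the problem is on the AD/token side, not SQL Server
        EXEC xp_logininfo 'domain\foo', 'all';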

    Read the article

  • Developer Laptop with SQL Server 2008 can't login to SSIS when offsite

    - by wizlb
    When I bring my Windows XP (SP3) laptop home I can still log in with my domain account, because Windows caches the information necessary to authenticate me when the domain controller isn't around. However, when I try to connect to Integration Services from within SQL Server Management Studio, it generates SSPI context errors. The only way it works is if I connect to the office over VPN, or if I'm at the office where the domain controller is. I have both SQL Server Agent and SQL Server Integration Services 10.0 running under local computer accounts. It seems that the only option for connecting to Integration Services from within Management Studio is Windows authentication. Is there any way to do this when I'm not connected to the office? Why don't these services use the cached credentials just like the Windows login does? Thanks.

    Read the article

  • Visual Studio Agents 2012 on Server 2003 SP2

    - by Corith Malin
    I'm attempting to build out our Lab Manager with TFS 2012. On a virtual machine running Server 2003 SP2 32-bit, I'm attempting to install the Visual Studio Agents 2012 and am running into an error: "Setup Failed! Install cannot continue because some required components failed: Microsoft .NET Framework 4.5." Looking into it, the install log shows an error when it attempts to install the dotNetFx45_Full_x86_x64.exe component. That component's install log fails with: "The .NET Framework 4.5 is not supported on this operating system." So, I see according to the Agents 2012 MSDN documentation that Server 2003 SP2 is supported by Agents 2012, but I also see that according to the .NET 4.5 MSDN documentation, Server 2003 isn't supported. So how do I install Agents 2012 on 2003 SP2, as the documentation implies I can?

    Read the article
