Search Results

Search found 93093 results on 3724 pages for 'server core'.


  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that it allows a number of clients to connect to the server? I have my application running both on the server machine and on the client machines. Given the nature of my application, the clients are branches geographically distant from each other and from the server itself. Every operation a client records must be reported to the server, because the server needs total control over usage and production. But what should I consider when configuring the connection on both sides, server and client? I'm not very used to SQL Server (I'm a beginner), but through SQL Server Configuration Manager I have set the main options, without success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect using Windows authentication it displays no tables; when it communicates using a password (SQL Server authentication), the tables are displayed but no access is allowed... Thanks in advance!
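
    If the SQL Server authentication side connects but is then denied access, a minimal server-side sketch (login and database names here are hypothetical) is to create the login, map it into the application database, and grant it the read/write roles. Remote access also generally requires TCP/IP enabled in Configuration Manager and, for named instances, the SQL Browser service plus a firewall exception.

        USE master;
        GO
        -- Create a SQL Server login for a branch client (hypothetical name/password)
        CREATE LOGIN BranchClient WITH PASSWORD = 'Str0ng!Passw0rd';
        GO
        USE BranchDB;   -- the application database (hypothetical name)
        GO
        -- Map the login to a database user and grant read/write access
        CREATE USER BranchClient FOR LOGIN BranchClient;
        EXEC sp_addrolemember 'db_datareader', 'BranchClient';
        EXEC sp_addrolemember 'db_datawriter', 'BranchClient';
        GO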

    Read the article

  • as3corelib and Adobe AIR update framework conflicts

    - by Sny
    Hello, I am working on an Adobe AIR HTML/Ajax application in which I need both as3corelib and the AIR update framework. The update framework works only if I remove as3corelib from my code, but both are essential for my project. By the way, I am using only JpegEncoder from as3corelib. Thanks for reading :)

    Read the article

  • How to build and run core ICS apps (like Settings, Camera, etc.) on Windows

    - by user1495186
    I have downloaded the ICS 4.0.3 source code and want to modify the native Settings app. What I have to do is: 1) add custom modifications to Settings; 2) recompile native Settings with the added modifications; 3) build the source code; 4) generate a customized build that works on all Android devices. How can I achieve this? Every suggestion is appreciated. Thanks in advance. FYI: using Windows 7, 4 GB RAM, Intel i5 processor; Cygwin and Git installed.

    Read the article

  • iPhone - Create non-persistent entities in Core Data

    - by ncohen
    Hi everyone, I would like to use entity objects but not store them... I read that I could create them like this:

        myElement = (Element *)[NSEntityDescription insertNewObjectForEntityForName:@"Element"
                                                             inManagedObjectContext:managedObjectContext];

    and right after that remove them:

        [managedObjectContext deleteObject:myElement];

    Then I can use my elements:

        myElement.property1 = @"Hello";

    This works pretty well, even though it is probably not the most optimal way to do it... Then I tried to use it in my UITableView... the problem is that the object gets released after initialization. My table becomes empty when I move it! Thanks
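
    A minimal sketch of a cleaner alternative, assuming the same "Element" entity: initialize the managed object with a nil context, so it is never registered for saving at all. Note that nothing then retains the object for you, which is likely why table rows vanish; something must keep an owning reference.

        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Element"
                                               inManagedObjectContext:managedObjectContext];
        // nil context: the object exists in memory only and is never persisted
        Element *myElement = [[Element alloc] initWithEntity:entity
                              insertIntoManagedObjectContext:nil];
        myElement.property1 = @"Hello";
        // keep an owning reference (e.g. store it in an array) while the table uses it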

    Read the article

  • Implement delegates for Core Data or not

    - by Spanky
    What advantage is there to implementing the four delegate methods:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller
        - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type
        - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath
        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller

    rather than implementing only:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller

    Any help appreciated // :)
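
    Implementing all four lets the table view animate individual row and section changes as they happen; implementing only controllerDidChangeContent: usually means reloading the whole table. A minimal sketch of the standard pattern, assuming a table view controller whose fetched results controller drives self.tableView (the update/move cases are elided):

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates]; // batch the row/section animations
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath {
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                default: // NSFetchedResultsChangeUpdate / NSFetchedResultsChangeMove elided
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates]; // commit the batched animations
        }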

    Read the article

  • Core Data - insertNewObjectForEntityForName debug

    - by Snow Crash
    I'm trying to figure out why insertNewObjectForEntityForName: is not working. I assume it's something to do with my data model, but I can't be sure. Xcode does not report any errors, nor does the app crash. All I get is the first log statement output to the console.

        NSLog(@"Get here...");
        Task *task = (Task *)[NSEntityDescription insertNewObjectForEntityForName:@"Task"
                                                           inManagedObjectContext:insertionContext];
        NSLog(@"but never get here...");

    Any suggestions as to how I can work out what the problem is?
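
    A minimal diagnostic sketch, assuming the variable names above: check that the context is non-nil and that the loaded model really contains a "Task" entity. A mismatch between the model file the coordinator actually loaded and the one edited in Xcode is a common cause.

        NSAssert(insertionContext != nil, @"managed object context is nil");
        NSManagedObjectModel *model =
            [[insertionContext persistentStoreCoordinator] managedObjectModel];
        NSLog(@"Entities in model: %@", [[model entitiesByName] allKeys]);
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task"
                                                inManagedObjectContext:insertionContext];
        NSLog(@"Task entity: %@", entity); // nil here means the model has no Task entity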

    Read the article

  • Core Data subquery without an inverse relationship

    - by user561485
    Hi, I have the following entities in Core Data: Village, with a villageID attribute, and Bookmark, with a single relation to a Village. There are multiple villages, each with a unique villageID. The Bookmark entity only has the relation to a Village entity; it isn't possible to create an inverse relation. Now I would like to get the Village entities for which a Bookmark exists. I've read something about subqueries, but I can't get it right for this situation. It should be something like: Village.villageID IN (Bookmark.village.villageID). It isn't possible to first get all the Bookmarks and then loop over them to get the Villages, because of the design of the framework. Can this be done in Core Data (I presume the answer is "Yes, of course!"), and how?
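
    If a single extra fetch (with no manual loop) is acceptable, a minimal sketch: fetch the Bookmark objects once, collect the referenced village IDs through key-value coding, and use them in an IN predicate for the Village fetch. Entity and attribute names are taken from the question above.

        NSFetchRequest *bookmarkRequest = [[[NSFetchRequest alloc] init] autorelease];
        [bookmarkRequest setEntity:[NSEntityDescription entityForName:@"Bookmark"
                                               inManagedObjectContext:context]];
        NSArray *bookmarks = [context executeFetchRequest:bookmarkRequest error:NULL];
        // KVC maps the key path over the array, yielding the bookmarked village IDs
        NSArray *bookmarkedIDs = [bookmarks valueForKeyPath:@"village.villageID"];

        NSFetchRequest *villageRequest = [[[NSFetchRequest alloc] init] autorelease];
        [villageRequest setEntity:[NSEntityDescription entityForName:@"Village"
                                              inManagedObjectContext:context]];
        [villageRequest setPredicate:
            [NSPredicate predicateWithFormat:@"villageID IN %@", bookmarkedIDs]];
        NSArray *bookmarkedVillages = [context executeFetchRequest:villageRequest error:NULL];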

    Read the article

  • How does Core Data actually detect conflicts?

    - by brainfrog
    Apple's documentation says about -detectConflictsForObject:: "If on the next invocation of save: object has been modified in its persistent store, the save fails. This allows optimistic locking for unchanged objects. Conflict detection is always performed on changed or deleted objects." So what does this mean? If I simply modify a managed object and then save the context, does conflict detection always happen? Does this detection simply compare the timestamps of the "records" to see whether the "new" data is actually "old"? Is that a conflict?

    Read the article

  • Using an iPhone Core Data to-many relationship

    - by BLeB
    When I define a to-many relationship between entities in Xcode and then generate the data class from the entity, I get a header with the following methods defined:

        @interface PriceList (CoreDataGeneratedAccessors)
        - (void)addItemsObject:(PriceListItem *)value;
        - (void)removeItemsObject:(PriceListItem *)value;
        - (void)addItems:(NSSet *)value;
        - (void)removeItems:(NSSet *)value;
        @end

    When I attempt to call addItemsObject: with the following code, a doesNotRecognizeSelector exception is thrown:

        PriceListItem *item = [NSEntityDescription insertNewObjectForEntityForName:@"PriceListItem"
                                                            inManagedObjectContext:managedObjectContext];
        item.cat = [attributeDict valueForKey:@"c"];
        item.sel = [attributeDict valueForKey:@"s"];
        [self addItemsObject:item];

    From what I have read, I do not have to implement these methods; they are generated at runtime. Any ideas?
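
    For comparison, a minimal sketch of the call going to the owning managed object, assuming priceList is a fetched or inserted PriceList instance: the accessors are synthesized at runtime on the PriceList managed object (and only when the entity's class is set to PriceList in the model), not on whatever class self happens to be.

        PriceList *priceList = (PriceList *)[NSEntityDescription
            insertNewObjectForEntityForName:@"PriceList"
                     inManagedObjectContext:managedObjectContext];
        [priceList addItemsObject:item]; // resolved dynamically on the PriceList instance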

    Read the article

  • Core Data - Linking one-to-many relationships

    - by Stelmate
    I have a one-to-many relationship where each department has many employees. When I create a new employee object, I link it to the parent department manually by setting the property to the instance of the department I have fetched with my fetch request. However, this seems to be wrong, because when I try to access the set of employees through the .employees property on my department object instance, it returns a count of 0. Isn't the fault supposed to fire once I access the property? Am I linking my parent/child objects incorrectly?
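
    A minimal sketch, assuming Department and Employee subclasses with the relationship and its inverse both configured in the model: setting either side is enough, because Core Data maintains the inverse automatically. If the count stays at 0, the likely culprit is that the inverse relationship is not set up in the model editor.

        Employee *employee = (Employee *)[NSEntityDescription
            insertNewObjectForEntityForName:@"Employee"
                     inManagedObjectContext:context];
        employee.department = department;            // sets the to-one side
        // equivalently: [department addEmployeesObject:employee];
        NSLog(@"employees: %lu", (unsigned long)[department.employees count]); // now 1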

    Read the article

  • AND NSPredicate on Core Data relationships

    - by jesse001
    I'm having trouble compounding an NSPredicate with AND, although OR works fine. Imagine two entities, Doctor and Patient: doctors can have many patients, and patients many doctors. I want to find the doctors that have, say, both person1 and person2 as patients. I expected this to work, but it returns no results:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"ANY patients matches 'person1&&person2'"];

    If I change && to ||, I get all doctors that have person1 or person2, as I'd expect. Thanks in advance for your help.
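
    One commonly suggested pattern for "must contain both" queries, sketched here under the assumption that Patient has a name attribute: demand each patient separately with SUBQUERY and join the two conditions with AND, so each condition can be satisfied by a different related object.

        NSPredicate *predicate = [NSPredicate predicateWithFormat:
            @"SUBQUERY(patients, $p, $p.name == %@).@count > 0 AND "
            @"SUBQUERY(patients, $p, $p.name == %@).@count > 0",
            @"person1", @"person2"];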

    Read the article

  • Store images in Core Data and Retina display?

    - by shani
    Hi, I have an app that has hundreds of words with 3/4 images for each word. I have two versions of each image: one for iOS 3 and one for Retina display. I wish to save the images as data and connect them to the appropriate word, so it will be easy to pull them later. My question is: how do I get the suitable size? It works great with the @2x suffix when you load the image from the app file system, but how is it supposed to work when I get it from data? Thanks, shani
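
    A minimal sketch of one workaround: the @2x lookup only applies when UIKit loads an image by file name from the bundle, so for bytes coming out of Core Data the image can be rebuilt with an explicit scale (imageWithCGImage:scale:orientation: is available from iOS 4). Here storedImageData stands for the NSData fetched from the store.

        UIImage *raw = [UIImage imageWithData:storedImageData]; // scale defaults to 1.0
        UIImage *image = [UIImage imageWithCGImage:raw.CGImage
                                             scale:[[UIScreen mainScreen] scale]
                                       orientation:UIImageOrientationUp];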

    Read the article

  • Core Data to-many relationship in code

    - by Jan Bezemer
    I have three entities: Session, User and Test. A session has 0-many users and a user can perform 0-6 tests. (I say 0, but in the real application at least 1 is always required: at least 1 user for a session and at least 1 test for a user. I say 0 to express an empty start.) All entities have their own attributes too: a user has a name, a session has a name, a test has six values to be filled in by the user, and so on. But my issue is with the relationships. First, how do I create multiple users and add them to one session (the same goes for multiple tests for one user)? Second, how do I show the content the right way: a session that has multiple users, with those users having completed multiple tests? Here's my code so far with regard to issue 1:

        Session *session = [NSEntityDescription insertNewObjectForEntityForName:@"Session"
                                                         inManagedObjectContext:context];
        session.name = @"Session 1";
        User *users = [NSEntityDescription insertNewObjectForEntityForName:@"User"
                                                    inManagedObjectContext:context];
        users.age = [NSNumber numberWithInt:28];
        users.session = session;
        //session.users = users;
        [session addUserObject:users];

    With regard to issue 2: I can log the session, but I can't get the user(s) of a session.

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Session"
                                                  inManagedObjectContext:context];
        [fetchRequest setEntity:entity];
        NSArray *fetchedObjects = [context executeFetchRequest:fetchRequest error:&error];
        for (Session *info in fetchedObjects) {
            NSLog(@"Name: %@", info.name);
            NSLog(@"Having problems with this: %@", info.user);
            //User *details = info.user;
            //NSLog(@"User: %@", details.age);
        }
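
    For the second issue, a minimal sketch, assuming the to-many relationship from Session to User is named users: a to-many relationship is an NSSet, so it has to be enumerated rather than read as a single user property.

        for (Session *info in fetchedObjects) {
            NSLog(@"Name: %@", info.name);
            for (User *user in info.users) {   // users is an NSSet of User objects
                NSLog(@"User age: %@", user.age);
            }
        }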

    Read the article

  • Improving the efficiency of multiple concurrent Core Animation animations

    - by Alex
    I have a view in my app that is very similar to the month view in the built-in Calendar app. There's a subview that holds the individual cells (a custom UIView subclass that draws text into its layer), and when the user navigates to the next "month", I create the new cells and slide the view to show them. When the animation stops, I remove the old, hidden cells and set things up so it's ready to go for the next animation. This all works nicely. However, I'd like to animate the cells' text color, as in the Calendar app, so that the outgoing ones transition to a lighter color and the incoming ones transition to a darker color. The problem is that I can have as many as 70 cells, so doing individual animations is very slow: between 5 and 10 fps on my iPhone 3GS. I'm trying to find a less computationally intensive way of doing this. My reading of the Shark results is that the majority of the time is spent redrawing the text of each cell on every frame. This makes sense, since text rendering is hardly the cheapest operation. I've considered creating a second view: one view holding the "outgoing" state and one holding the "incoming" state, and using a single opacity animation to gradually reveal the updated cells while both are sliding. I'm concerned that instead of having 70 cells, I'll have 140, which seems like a lot of views. So, is that too many views, or would there be a better way of doing this?

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner, currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will write custom code and host it in an Azure worker role; the aim is to add features that are not currently available in SQL Azure, or features that need to be customized for flexibility. This way we extend SQL Azure's capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud: it can perform background processes, which makes it an ideal tool for handling tasks such as synchronization and backup. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

        - Scaling out access over multiple databases enables us to handle workload efficiently
        - As of now, a SQL Azure database can be hosted in any one of six datacenters; by synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data

    Let us also see some scenarios in which SQL Server to SQL Azure synchronization is beneficial:

        - To back up a SQL Azure database on local infrastructure
        - Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud
        - The ability to extend data to datacenters located across the world, to enable efficient data access from remote locations

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf). Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata, to enable synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop; the code logic that synchronizes changes between the SQL Azure databases goes in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
    Therefore, let me suggest a resource, given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here. Currently, a tool named Data Sync, built on top of the Sync Framework, is available as a CTP that allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization (without writing a single line of code); however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in SQL Azure Data Sync CTP2; if you wish to have such functionality now, you have the option of developing custom code using the Sync Framework. Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code; in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that the SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role. Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of blob storage applies. So, before choosing an option, you need to evaluate your preferences, keeping the cost associated with each option in mind. In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
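
    Since the actual skeleton lives in the linked PDFs, here is only a rough sketch of the worker-role shape described above; SetupSync() and Sync() are hypothetical placeholders standing in for the Sync Framework provisioning and synchronization code.

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                SetupSync();   // one-time: provision scopes, tracking tables, triggers
                while (true)
                {
                    Sync();    // propagate changes between the SQL Azure databases
                    Thread.Sleep(TimeSpan.FromMinutes(5)); // or wait for the scheduled time
                }
            }

            private void SetupSync() { /* Sync Framework provisioning goes here */ }
            private void Sync()      { /* Sync Framework orchestration goes here */ }
        }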

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007
    Explanation and Understanding NOT NULL Constraint
    NOT NULL is an integrity CONSTRAINT. It does not allow creating a row where the column contains a NULL value. The most discussed question about NULL is: what is NULL? I will not go into an in-depth analysis of it. Simply put, NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULLs in the database unless they are absolutely necessary.
    Three T-SQL Scripts to Create Primary Keys on Table
    I have always enjoyed writing about three topics: constraints and keys, backup and restore, and datetime functions. Primary key constraints prevent duplicate values in columns and provide a unique identifier for each row, as well as creating a clustered index on the columns.

    2008
    Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only
    SQL is great with string operations. Many times, I use T-SQL to do my string operations. Let us see a User Defined Function, which I wrote a few days ago, that returns only the numeric values from an alphanumeric string.
    Introduction and Example of UNION and UNION ALL
    It is very interesting when I get requests from blog readers to re-write my previous articles. I have received a few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand the concept, and then read this article to understand the same concept with an example.
    Downgrade Database for Previous Version
    The main question is how to downgrade from SQL Server 2005 to SQL Server 2000. The answer: not possible.
    Get Common Records From Two Tables Without Using Join
    Following is my scenario: suppose Table1 and Table2 have the same column, e.g. Column1. Following are the queries: 1. SELECT Column1, Column2 FROM Table1; 2. SELECT Column1 FROM Table2. I want to find the common records from these tables, but I don't want to use the JOIN clause, because for that I need to specify the column name in the join condition. Will you help me to get the common records without using a join? I am using SQL Server 2005. (One way is sketched below.)
    Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2
    A year ago I wrote a post, SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice, where I discussed two different methods of getting the date part from a datetime.
    Introduction to CLR – Simple Example of CLR Stored Procedure
    CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in the CLR: stored procedures, functions and triggers can all be coded in CLR. CLR is faster than T-SQL in many cases, and is mainly used to accomplish tasks which are not possible in T-SQL or which would use a lot of resources. The CLR is usually brought in where there are intense string operations, thread management or iteration methods that would be complicated in T-SQL. Implementing CLR also provides more security than an Extended Stored Procedure.
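
    As a side note on the "common records without a join" question above, one way to do it since SQL Server 2005 is the INTERSECT operator; a minimal illustration with the hypothetical tables from that question:

        SELECT Column1 FROM Table1
        INTERSECT
        SELECT Column1 FROM Table2;
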
    2009
    Comic Slow Query – SQL Joke
    Before Presentation / After Presentation
    Enable Automatic Statistic Update on Database
    In one of the recent projects, I found out that despite putting good indexes and optimizing the query, I could not achieve optimized performance and still received an unoptimized response from SQL Server. On examination, I figured out that the culprit was statistics: the database I was trying to optimize had auto-update of statistics disabled.
    Recently Executed T-SQL Query
    Please refer to the blog post for a query to list recently executed T-SQL queries on the database.
    Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extension Script
    At some time in your DBA career, you may find yourself in a position where you sit back and realize that your database collations have somehow run amok, or you are faced with the ever-annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings.

    2010
    Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
    Everyone dreams of visiting the school and college where they once studied. It is a great feeling to see the college again, where you spent those wonderful golden years. College time is filled with studies, education, emotions and several plans to build a future. I consider myself fortunate, as I got the opportunity to study at some of the best places in the world.
    Change Column DataTypes
    There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple? I often do, and I enjoy doing things which are straightforward and easy to change.

    2011
    Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry
    In this blog post we will see three new DMVs introduced in Denali. The DMVs are very simple and there is not much to describe about them. So here is the simple game: I will ask a question back to you after seeing the result of each DMV, and you will help me complete this blog post.
    A Simple Quiz – T-SQL Brain Trick
    If you have some time, I strongly suggest you try this quiz out, as it will surely twist your brain.

    2012
    List All The Column With Specific Data Types in Database
    5 years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very much relevant, and I liked it. This is one of those scripts every developer would like to keep handy. I have upgraded the script a bit and included some additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when writing it the first time.
    Find First Non-Numeric Character from String
    The function PATINDEX has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it, and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character.
    Finding Different ColumnName From Almost Identical Tables
    Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of a newly added column. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS.
    Storing Data and Files in Cloud – Dropbox – Personal Technology Tip
    I thought long and hard about doing a Personal Technology Tips series for this blog. I have so many tips I'd like to share. I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share when given the chance. The only thing holding me back: which tip to share first? The first tip obviously carries the weight of seeming the most important, but that would mean choosing amongst my favorite tricks and shortcuts. This is a hard task.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • AD-Integrated DNS failure: "Access was Denied"

    - by goldPseudo
    I have a single Windows 2008 R2 server configured as a domain controller with Active Directory Domain Services and DNS Server. The DNS Server was recently uninstalled and reinstalled in an attempt to fix a (possibly unrelated) problem; the event log was previously flooded with errors (#4000, "The DNS Server was unable to open Active Directory...") which reinstalling did not fix. However, while before it was at least showing and resolving names from the local network (slowly), now it's showing nothing at all. (The original error started with a #4015 error "The DNS server has encountered a critical error from the Active Directory," followed by a long string of #4000 and a few #4004. This may have been caused when a new DNS name was recently added, but I can't be sure of the timing.) Attempting to manage the DNS through Administrative Tools > DNS brings up an error: The server SERVERNAME could not be contacted. The error was: Access was denied. Would you like to add it anyway? Selecting yes just puts a SERVERNAME item on the list, but with all the configuration options grayed out. I attempted editing my hosts file as per this post but to no avail. Running dcdiag, it does identify the home server properly, but fails right away testing connectivity with: Starting test: Connectivity The host blahblahblahyaddayaddayadda could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings. ......................... SERVERNAME failed test Connectivity Adding the blahblahblahyaddayaddayadda address to hosts (pointing at 127.0.0.1), the connectivity test succeeded but it didn't seem to solve the fundamental problem (Access was denied) so I hashed it out again. Primary DNS server is properly pointing at 127.0.0.1 according to ipconfig /all. And the DNS server is forwarding requests to external addresses properly (if slowly), but the resolving of local network names is borked. The DNS database itself is small enough that I am (grudgingly) able to rebuild it if need be, but the DNS Server doesn't seem willing to let me work with (or around) it at all. (and yes before you ask there are no system backups available) Where do I go from here? As requested, my (slightly obfuscated) dcdiag output: Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = bulgogi * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Obfuscated\BULGOGI Starting test: Connectivity The host a-whole-lot-of-numbers._msdcs.obfuscated.address could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings. ......................... BULGOGI failed test Connectivity Doing primary tests Testing server: Obfuscated\BULGOGI Skipping all tests, because server BULGOGI is not responding to directory service requests. Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... 
DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : obfuscated Starting test: CheckSDRefDom ......................... obfuscated passed test CheckSDRefDom Starting test: CrossRefValidation ......................... obfuscated passed test CrossRefValidation Running enterprise tests on : obfuscated.address Starting test: LocatorCheck ......................... obfuscated.address passed test LocatorCheck Starting test: Intersite ......................... obfuscated.address passed test Intersite And my hosts file (minus the hashed lines for brevity): 127.0.0.1 localhost ::1 localhost And, for the sake of completion, here's selected chunks of my netstat -a -n output: TCP 0.0.0.0:88 0.0.0.0:0 LISTENING TCP 0.0.0.0:135 0.0.0.0:0 LISTENING TCP 0.0.0.0:389 0.0.0.0:0 LISTENING TCP 0.0.0.0:445 0.0.0.0:0 LISTENING TCP 0.0.0.0:464 0.0.0.0:0 LISTENING TCP 0.0.0.0:593 0.0.0.0:0 LISTENING TCP 0.0.0.0:636 0.0.0.0:0 LISTENING TCP 0.0.0.0:3268 0.0.0.0:0 LISTENING TCP 0.0.0.0:3269 0.0.0.0:0 LISTENING TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING TCP 0.0.0.0:9389 0.0.0.0:0 LISTENING TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING TCP 0.0.0.0:49152 0.0.0.0:0 LISTENING TCP 0.0.0.0:49153 0.0.0.0:0 LISTENING TCP 0.0.0.0:49154 0.0.0.0:0 LISTENING TCP 0.0.0.0:49155 0.0.0.0:0 LISTENING TCP 0.0.0.0:49157 0.0.0.0:0 LISTENING TCP 0.0.0.0:49158 0.0.0.0:0 LISTENING TCP 0.0.0.0:49164 0.0.0.0:0 LISTENING TCP 0.0.0.0:49178 0.0.0.0:0 LISTENING TCP 0.0.0.0:49179 0.0.0.0:0 LISTENING TCP 0.0.0.0:50480 0.0.0.0:0 LISTENING TCP 127.0.0.1:53 0.0.0.0:0 LISTENING TCP 192.168.12.127:53 0.0.0.0:0 LISTENING TCP 192.168.12.127:139 0.0.0.0:0 LISTENING TCP 192.168.12.127:445 192.168.12.50:51118 ESTABLISHED TCP 192.168.12.127:3389 192.168.12.4:33579 ESTABLISHED TCP 192.168.12.127:3389 192.168.12.100:1115 ESTABLISHED TCP 192.168.12.127:50784 192.168.12.50:49174 ESTABLISHED <snip ipv6> UDP 0.0.0.0:123 *:* UDP 0.0.0.0:500 *:* UDP 0.0.0.0:1645 *:* UDP 0.0.0.0:1645 *:* UDP 0.0.0.0:1646 *:* UDP 0.0.0.0:1646 *:* UDP 0.0.0.0:1812 *:* UDP 0.0.0.0:1812 *:* UDP 0.0.0.0:1813 *:* UDP 0.0.0.0:1813 *:* UDP 0.0.0.0:4500 *:* UDP 0.0.0.0:5355 *:* UDP 0.0.0.0:59638 *:* <snip a few thousand lines> UDP 0.0.0.0:62140 *:* UDP 127.0.0.1:53 *:* UDP 127.0.0.1:49540 *:* UDP 127.0.0.1:49541 *:* UDP 127.0.0.1:53655 *:* UDP 127.0.0.1:54946 *:* UDP 127.0.0.1:58345 *:* UDP 127.0.0.1:63352 *:* UDP 127.0.0.1:63728 *:* UDP 127.0.0.1:63729 *:* UDP 127.0.0.1:64215 *:* UDP 127.0.0.1:64646 *:* UDP 192.168.12.127:53 *:* UDP 192.168.12.127:67 *:* UDP 192.168.12.127:68 *:* UDP 192.168.12.127:88 *:* UDP 192.168.12.127:137 *:* UDP 192.168.12.127:138 *:* UDP 192.168.12.127:389 *:* UDP 192.168.12.127:464 *:* UDP 192.168.12.127:2535 *:* <snip ipv6 again>

    Read the article

  • SQL SERVER – What is AdventureWorks?

    - by pinaldave
    NOTE: If you know the answer to this question, I request you to stop reading this post right now. Please do not leave a comment saying this blog post was not useful to you if you already knew the answer. A few days ago, I received a DM asking what the AdventureWorks database is and why I use it in all my examples instead of any other database (e.g. Pubs or Northwind). As a matter of fact, when I went back to my question list, which I have not yet answered, there were a few more variations of this same question. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the http://codeplex.com site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. Here are some quick links:

        AdventureWorks SQL Server 2008 SR4
        AdventureWorks 2008R2 November CTP
        AdventureWorks for SQL Azure (December CTP)
        AdventureWorks for SQL Server 2005 SP2A
        SQL SERVER – 2008 – Download and Install Samples Database
        AdventureWorks 2005 – Detail Tutorial

    I have previously written a few other articles on the same subject. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
    I recently received an interesting comment on the blog regarding a workaround to overcome the precision issue when dealing with DATETIME and DATETIME2. I have written on this subject earlier here:

        SQL SERVER – Difference Between GETDATE and SYSDATETIME
        SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE
        SQL SERVER – Difference Between DATETIME and DATETIME2

    SQL expert Jing Sheng Zhong has left the following comment: The issue you found in the SQL Server new datetime type is related to time-source function precision. Folks have found the root reason of the problem: when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results cannot match each other as expected. Here I would like to give a workaround to solve the problem the developers met.

        -- Declare and loop
        DECLARE @Intveral INT, @CurDate DATETIMEOFFSET;
        CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET)
        SET @Intveral = 10000
        WHILE (@Intveral > 0)
        BEGIN
            ----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only
            SET @CurDate = TODATETIMEOFFSET(GETDATE(), DATEDIFF(N, GETUTCDATE(), GETDATE())); -- lower precision to match existing date process
            INSERT #TimeTable (FirstDate, LastDate, GlobalDate)
            VALUES (@CurDate, @CurDate, @CurDate)
            SET @Intveral = @Intveral - 1
        END
        GO
        -- Distinct values
        SELECT COUNT(DISTINCT FirstDate) D_DATETIME,
               COUNT(DISTINCT LastDate) D_DATETIME2,
               COUNT(DISTINCT GlobalDate) D_SYSGETDATE
        FROM #TimeTable
        GO
        -- Join
        SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate,
               CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime
        FROM #TimeTable a
        INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME)
        GO
        -- Select
        SELECT * FROM #TimeTable
        GO
        -- Clean up
        DROP TABLE #TimeTable
        GO

    If you read my blog post SQL SERVER – Difference Between DATETIME and DATETIME2, you will notice that I have achieved the same using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Problem with setup VPN on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem setting up a VPN server on my Ubuntu VPS. Here is my server environment:

        Ubuntu Server 12.04 x86_64
        xl2tpd 1.3.1+dfsg-1
        pppd 2.4.5-5ubuntu1
        openswan 1:2.6.38-1~precise1

    After installing the software and configuring it, ipsec verify reports:

        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                  [OK]
        Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey)
        Checking for IPsec support in kernel             [OK]
        SAref kernel support                             [N/A]
        NETKEY: Testing XFRM related proc values         [OK] [OK] [OK]
        Checking that pluto is running                   [OK]
        Pluto listening for IKE on udp 500               [OK]
        Pluto listening for NAT-T on udp 4500            [OK]
        Checking for 'ip' command                        [OK]
        Checking /bin/sh is not /bin/dash                [WARNING]
        Checking for 'iptables' command                  [OK]
        Opportunistic Encryption Support                 [DISABLED]

    /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I researched the subject further and ended up at the following white paper written by Microsoft: Understanding and Controlling Parallel Query Processing in SQL Server. "Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them." To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
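
    For readers who want to experiment with the MAXDOP discussion above, a minimal illustration of capping parallelism for a single statement (table and column names are hypothetical):

        SELECT c.Region, SUM(s.Amount) AS TotalAmount
        FROM dbo.Sales AS s
        INNER JOIN dbo.Customers AS c ON c.CustomerID = s.CustomerID
        GROUP BY c.Region
        OPTION (MAXDOP 1); -- forces a serial plan for this query only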

    Read the article

  • SQLAuthority News – Free eBook Download – Introducing Microsoft SQL Server 2008 R2

    - by pinaldave
    Microsoft Press has published a FREE eBook on the much-awaited release of SQL Server 2008 R2. The book is written by Ross Mistry and Stacia Misner. Ross is a personal friend of mine and one of the most active book writers in the SQL Server domain. When I see his name on any book, I am sure that it will be a high-quality, easy-to-read book. The details about the book are here: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The book contains 10 chapters and 216 pages.

        PART I   Database Administration
        CHAPTER 1   SQL Server 2008 R2 Editions and Enhancements
        CHAPTER 2   Multi-Server Administration
        CHAPTER 3   Data-Tier Applications
        CHAPTER 4   High Availability and Virtualization Enhancements
        CHAPTER 5   Consolidation and Monitoring
        PART II   Business Intelligence Development
        CHAPTER 6   Scalable Data Warehousing
        CHAPTER 7   Master Data Services
        CHAPTER 8   Complex Event Processing with StreamInsight
        CHAPTER 9   Reporting Services Enhancements
        CHAPTER 10   Self-Service Analysis with PowerPivot

    More details about the book are listed here. You can download the eBook in XPS format here and in PDF format here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Windows Server 2008 R2 + IIS 7.5 + ASP.NET 4.0 = HTTP Error 500.0

    - by Dave
    I am having an impossible time getting ASP.NET 4.0 to work in any fashion at all. In fact, I completely wiped my server and reinstalled Server 2008 R2 Standard (running on a VMware ESXi box, not that it should matter), and cannot even get a test .aspx page to work. Here is exactly what I did:

        1. Installed 2008 R2 Standard
        2. Activated Windows and enabled Remote Desktop
        3. Installed the Web Server role with the necessary role services (common HTTP, ASP.NET, logging, tracing, management service and FTP)
        4. Enabled the management service
        5. Installed .NET Framework 4.0 via the web executable
        6. Added FTP publishing to the default web site
        7. Switched the default web site application pool to ASP.NET 4.0 (Integrated)
        8. Added a 'test.aspx' file to the inetpub\wwwroot folder (contents below)
        9. Opened a browser to http://localhost/test.aspx and received a 500.0 error (also below)

    What am I missing? I haven't touched IIS in a while (3+ years), so it could be something stupid/trivial. Please point it out; call me a noob, my ego can take it. Thanks, Dave

    test.aspx:

        <% @Page language="C# %>
        <html>
        <head>
            <title>Test.aspx</title>
        </head>
        <body>
            <asp:label runat="server" text="This is an asp.net 4.0 label" />
        </body>
        </html>

    Error page:

        Module: AspNetInitClrHostFailureModule
        Notification: BeginRequest
        Handler: PageHandlerFactory-Integrated-4.0
        Error Code: 0x80070002
        Requested URL: http://localhost:80/test.aspx
        Physical Path: C:\inetpub\wwwroot\test.aspx
        Logon Method: Not yet determined
        Logon User: Not yet determined

    And in my trace file I get:

        96. view trace Warning -SET_RESPONSE_ERROR_DESCRIPTION
            ErrorDescription: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.
        97. view trace Warning -MODULE_SET_RESPONSE_ERROR_STATUS
            ModuleName: AspNetInitClrHostFailureModule
            Notification: 1
            HttpStatus: 500
            HttpReason: Internal Server Error
            HttpSubStatus: 0
            ErrorCode: 2147942402
            ConfigExceptionInfo
            Notification: BEGIN_REQUEST
            ErrorCode: The system cannot find the file specified. (0x80070002)

    The application error log shows:

        Log Name: Application
        Source: Microsoft-Windows-IIS-W3SVC-WP
        Date: 5/28/2010 2:08:10 PM
        Event ID: 2299
        Task Category: None
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: win-ltfkdo1dnfp
        Description: An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error.
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" />
            <EventID Qualifiers="49152">2299</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" />
            <EventRecordID>1663</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>Application</Channel>
            <Computer>win-ltfkdo1dnfp</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.</Data>
            <Binary>02000780</Binary>
          </EventData>
        </Event>
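
    One commonly suggested check for a 0x80070002 (file not found) from AspNetInitClrHostFailureModule is whether ASP.NET 4.0 is actually registered with IIS; it can be (re)registered from an elevated command prompt (64-bit framework path shown):

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i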

    Read the article

  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited, and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need for that. Having native functions available helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely accepted. Now that the new functions are available, and as awareness spreads, we should start using them. Here is the quick list of the new functions; each one links to my SQLAuthority article and to the relevant MSDN page: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK. I also enjoyed three different puzzles during the course of this series, which gave a clear idea of the SQL Server 2012 analytic functions: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY; SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG; SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example. This series will always be a dear one to me, as during it I went through the very unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
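
    A minimal illustration of two of these functions against a hypothetical dbo.Orders table: LAG and LEAD pull the previous and next row's value without a self join.

        SELECT OrderID, OrderDate, Amount,
               LAG(Amount)  OVER (ORDER BY OrderDate) AS PreviousAmount,
               LEAD(Amount) OVER (ORDER BY OrderDate) AS NextAmount
        FROM dbo.Orders;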

    Read the article
