Search Results

Search found 98684 results on 3948 pages for 'sql server planet'.


  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. It contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents: Introduction | Downloading the software | Preparing the database | Creating the IRM database schema | WebLogic Server installation | Installing Oracle IRM

    Introduction

    Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some tips with Linux to make life a bit easier.

    Use a 64-bit platform; IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.
    Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.
    Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum here.
    Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you then extract them and then install, which quickly consumes disk space. You can easily recover that space by deleting the downloaded and extracted files afterwards, but it's nice to have the spare disk space to keep these around in case you need to restart any part of the installation process.

    Downloading the software

    Before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following:

    Oracle WebLogic Server
    Oracle Database
    Oracle Repository Creation Utility (RCU)
    Oracle IRM server

    You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

    Preparing the database

    I'm not going to go through the finer points of installing the database; there are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. There is no point going to all the effort of securing document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here.

    With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs, associated rights, etc.

    Creating the IRM database schema

    Oracle uses the Repository Creation Utility to build the schema. Extract the files from the RCU zip, then in a terminal window:

        cd /oracle/install/rcu/bin
        ./rcu

    This will launch the Repository Creation Utility and you will be presented with the image to the right. Hit next and continue onto the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you just need to click next at this point to create a new schema.

    The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation.

    Hostname: irm.oracle.demo
    Port: 1521 (the default TCP port for the Oracle Database)
    Service Name: irm.oracle.demo (note this is the service name, not the SID)
    Username: sys
    Password: ********
    Role: SYSDBA

    Then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select to deploy just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc.

    The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. (Some brave souls store this password in an Excel spreadsheet, which is then secured against the IRM server you're about to install in this guide.)

    Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had already set up a tablespace that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking on next causes it to create the schema and default data. After this you are presented with the completion summary.

    WebLogic Server installation

    The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application and is currently only supported on Oracle WebLogic Server. So the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:

        java -d64 -jar wls1033_generic.jar

    In the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/; then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but as you can see in the image on the right, it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install: hit next, sit back and relax. Typically this takes about 10 minutes.

    After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process.

    Installing Oracle IRM

    The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves. Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run:

        ./runInstaller

    This will launch the installer, which will also ask for the location of the JDK. Look at the image on the right for the detail. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go.

    The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next...

    Now choose where to install the IRM files; you must install into the same Middleware Home as the WebLogic Server installation you just performed. Usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install.

    The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, so everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage. Again, not fun.

    Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes. Finally the installation of your IRM server is complete. Click on finish, and the next phase is to create the WebLogic domain and start configuring your server. Now move onto the next article in this guide... configuring your IRM server ready to seal your first document.

    Read the article

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008, RC2. I am trying to create a symbolic/soft link using the mklink command:

        mklink /D LinkName TargetDir

    e.g.

        c:\temp\> mklink /D foo bar

    This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this. So, I updated the Local Security Policy via secpol.msc. Under "Local Policies" > "User Rights Assignment" > "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work. What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running a plain command line even with this updated policy in place still gives me:

        You do not have sufficient privilege to perform this operation.
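    Since the goal above is ultimately for "another program (executing as a user)" to create the link, here is a minimal C# sketch of that programmatic path; the paths are hypothetical, and the point is that the same privilege problem surfaces as Win32 error 1314 (ERROR_PRIVILEGE_NOT_HELD):

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;

        class SymlinkProbe
        {
            // CreateSymbolicLink is the Win32 API behind mklink.
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            [return: MarshalAs(UnmanagedType.I1)]
            static extern bool CreateSymbolicLink(string linkName, string targetName, int flags);

            const int SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1;

            static void Main()
            {
                // Hypothetical paths, matching the mklink example above.
                if (CreateSymbolicLink(@"c:\temp\foo", @"c:\temp\bar", SYMBOLIC_LINK_FLAG_DIRECTORY))
                {
                    Console.WriteLine("Link created.");
                }
                else
                {
                    int err = Marshal.GetLastWin32Error();
                    // 1314 = ERROR_PRIVILEGE_NOT_HELD: the token lacks SeCreateSymbolicLinkPrivilege.
                    Console.WriteLine("Failed, Win32 error {0}: {1}", err, new Win32Exception(err).Message);
                }
            }
        }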

    Read the article

  • No Outbound Internet on Windows Home Server

    - by Kyle B.
    Could someone provide some steps for me to check my internet connection on my Windows Home Server? It seems to have intermittent connectivity issues, and I am unsure how to diagnose the problem because it is a headless (no monitor, no keyboard) machine, so the only way to get to the device is via remote desktop (which works fine). When connected to the machine, it doesn't pull up any microsoft.com sites; some other sites it does pull up (e.g. gmail.com) and some it doesn't (stackoverflow.com). To make matters more complicated, it has worked intermittently in the past for reasons unknown. Are there tools I can use to properly diagnose the reason for the connection failure? I can ping 127.0.0.1 just fine, and I have internet working on my other router-connected machines, so I'm not sure why this one would fail. Any suggestions would be much appreciated and up-voted :)

    ** edit - Thanks for the suggestions guys, I'm going to try these tonight and will update my post.

    ** edit #2 - I'm hoping this is a more permanent fix, but I have both changed my port on the router and restarted the router at the same time. The internet (for the moment) appears to be working. I will be sure to try everything we have discussed should this problem persist. Thanks, Kyle

    Read the article

  • Windows Home Server backup error

    - by domen
    I've finally built my WHS, but some other problems have shown up. I've googled, binged and searched SU with no success. The problem is the following: at the moment I've got a Win7 laptop and a Win7 PC, which should be backed up by the WHS. The laptop is backed up just fine with no issues, but when I try to manually back up the PC, after the "backup is starting" message, when the backup service should be monitoring changes on partitions, the PC gets disconnected from the home network and thus the backup process is stuck. Disabling/enabling the network adapter gets the PC back on the network. The only thing I've tried was reinstalling the connector software; no success. Also, I've downloaded the connector troubleshooter and the only thing it says is "DHCP server was not found". I'm not good with networks, so I couldn't figure out what that could indicate (all computers in the network are assigned static IPs). Any ideas what the problem can be? I can provide any additional information; I'm just not sure what may be helpful right now. Thanks.

    Read the article

  • Cross-forest universal groups on Windows Server?

    - by DotGeorge
    I would like to create a Universal Group whose members are a mix of cross-forest users and groups. In the following example, two forests are mentioned (US and UK) and two domains in each forest (GeneralStaff and Java). For example, the universalDevelopers group may comprise members from UK.Java.Developers and US.Java.Developers. Then, for example, there may be a group universalSales which contains the users UK.GeneralStaff.John and US.GeneralStaff.Dave. In the UK forest at the minute, I can freely add members and groups from the UK, but there is no way to add members from the US forest, despite having a two-way trust in place; e.g. I can log in with US members into UK and vice versa. A further complication is that, with a Universal group in the UK (which contains three domains), I can only add two of the three; it can't see the third. Could people please provide some thoughts on why cross-forest groups can't be created, and ways of 'seeing' all domains within a forest. EDIT: This is on a combination of Windows Server 2003 and 2008. Answers can be regarding either. Thanks!

    Read the article

  • Linux command-prompt FTP to an FTP server running on Windows

    - by Vass
    Hi, I am running FileZilla Server on Windows Vista. I have it set up to be accessed via outside IPs, and when I point a client at that IP it connects normally using the FileZilla client. On the same machine I have Ubuntu running in a VirtualBox, and when using the FileZilla client in there it works fine too. Now I want to try the command prompt. So I do ftp xxx.xxx.xx.xx, enter the name and password, and I get the ftp command prompt, but the commands are not working properly. When trying "ls" or "cd", these commands do not work. "cd" tells me that the current directory is "/" (root), but this does not make sense on the Windows operating system. Now, the FileZilla client takes the user in the application window directly to the root folder of the permitted filespace granted to that user. How can the same be done from the command prompt, if there is a way? It is as if the command prompt takes me to a root which does not exist, or where I don't even have the correct permissions to move around. Is there any way to be taken to the correct directory directly, or to move there, especially when the slashes are the wrong way around, etc.? Best,

    Read the article

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've currently put together a new system and part of that has been creating a software RAID 5 using 'mdadm' in Ubuntu Server. I successfully got to the point where I created the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

        watch cat /proc/mdstat

    To which the following was returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

        unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy with why another is a spare). So, just to be sure that something else isn't amiss, I wanted to try and re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? And also, how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/ Many thanks!

    p.s. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1

                 Layout : left-symmetric
             Chunk Size : 512K

                   Name : myraidbox:0  (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed
               0       8        1        -      faulty spare  /dev/sda1
               4       8       49        -      spare         /dev/sdd1

    Read the article

  • Audio server with best API?

    - by Wintermute
    I'm a web dev, working in a small studio with a couple of other devs and some crayon-munchers (or, "designers"). Like all the best and trendiest creative studios, we have tunes. Our tunes consist of a set of speakers that whoever wants to can plug into their machine and DJ their little socks off via iTunes, Spotify, VLC or whatever their music player of choice happens to be. Obviously, this lacks finesse. What we WANT is this: a single, dedicated machine running some sort of audio player (ideally Win-based, but a Linux flavour isn't impossible), that exposes an API. We (ie: me and the other devs) want to write a web-based client onto it that'll let us remotely do all sorts of funky stuff like generating on-the-fly genre-based playlists, and voting for tracks, and making tea. My question - and please forgive me if this isn't the place for such a question, I was going to ask on Stack Overflow but that didn't seem right either - is this: what's the best player to start with? What can do all of this? I know VLC can function as a streaming server, but I know nothing of any API it may have. I'd rather chop my pinky off than use iTunes, but if it does what we want, then... Anyhow, thanks for reading. All comments and suggestions gratefully received.

    Read the article

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error: A network-related or instance-specific error occurred while establishing a connection to SQL Server I've seen this a few times, but only during the initial setup - it's often caused by one of the following: The database server is turned off The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall) The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled) However, if I assume that no configuration changes have been made, I'm trying to postulate on what the reason might be for getting this error at a random point after the initial setup. My initial thought is: SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server Is this a valid theory? What other possible causes are there of this error that are not related to the initial setup of the server / application connection? Or is it simply impossible that this error could occur without a configuration change having been made (either on the SQL Server side, application side, or somewhere in-between (network))? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).
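    As a quick way to separate "the network path is down" from "the server answered but refused or timed out", a bare SqlClient probe with a short timeout can help; this is just a diagnostic sketch (the server name and timeout are placeholders, not from the original question):

        using System;
        using System.Data.SqlClient;

        class ConnectivityProbe
        {
            static void Main()
            {
                // Hypothetical server name; short timeout so a dead network path fails fast.
                var connStr = "Server=tcp:DBSERVER,1433;Database=master;" +
                              "Integrated Security=true;Connect Timeout=5";
                try
                {
                    using (var conn = new SqlConnection(connStr))
                    {
                        conn.Open();
                        Console.WriteLine("Connected: network path and instance are both up.");
                    }
                }
                catch (SqlException ex)
                {
                    // Transport-level failures point at network/firewall/instance config;
                    // login or resource errors mean the server was reachable but refused.
                    Console.WriteLine("SqlException {0}: {1}", ex.Number, ex.Message);
                }
            }
        }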

    Read the article

  • MSSQL 2008 login failed for Windows authentication

    - by Force Flow
    I'm running Microsoft SQL 2008 on a Windows 2008 Server. The SQL Server authentication mode is set to SQL Server and Windows Authentication. I have created an Active Directory security group "xyz app users". I have added a normal user (without any Active Directory admin privileges) and a user with domain admin privileges to the "xyz app users" group. I have added the group to the SQL Server Management Console as a login. This group is a member of the public server role and is mapped to two databases. On a workstation, when the normal user is logged in, I configure a DSN ODBC connection, and I'm able to successfully create the DSN and test the SQL connection. However, when I'm logged in as the user with domain admin privileges and attempt to configure the DSN ODBC connection, I can't get past the login ID configuration screen. If I select "windows authentication" and click "next", I get an error:

        Connection failed:
        SQLState: '28000'
        SQL Server Error: 18456
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'mydomain\myuser'

    In the server's application event logs, this error appears:

        Login failed for user 'mydomain\myuser'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 172.x.x.x]

    And in SQL Server's event logs:

        Error: 18456, Severity: 14, State: 11

    Solutions that I've seen so far do not seem to fit this situation (some solutions I've seen are only applicable when BUILTIN\Administrators is being used locally on the server, which is not the case here).

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you have successfully implemented DDD in a client/server app and would like to share some experiences. We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories, and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise:

    Should the server expose repositories to the client, or do we need some sort of a facade (one that is able to handle security, for instance)?

    Should the client implement repositories (and DDD in general?), knowing that in the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously, and we have a single-threaded programming model on the client.

    How about mapping client to server objects and vice versa? We tried DTOs but reverted back to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)

    In general I think a new generation of applications is coming with the growth of Flex, Silverlight and JavaFX, and I'm curious how DDD fits into this.
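    To make the first question concrete, here is a minimal C# sketch of the facade option (all type names are made up for illustration): the client sees a coarse-grained service and DTOs, while repositories stay server-side behind it, giving one choke point for security checks:

        // Server-side domain object and repository; these never cross the wire.
        public class Order { public int Id; public string Status = "New"; }

        public interface IOrderRepository { Order FindById(int id); }

        // What the client actually talks to: a DTO plus a coarse-grained facade.
        public class OrderDto { public int Id; public string Status; }

        public interface IOrderService
        {
            OrderDto GetOrder(int orderId);
        }

        public class OrderService : IOrderService
        {
            private readonly IOrderRepository _orders;

            public OrderService(IOrderRepository orders) { _orders = orders; }

            public OrderDto GetOrder(int orderId)
            {
                // Authorization/auditing for every client call sits here, in one place.
                var order = _orders.FindById(orderId);

                // Map entity to DTO at the boundary so the domain model stays server-side.
                return new OrderDto { Id = order.Id, Status = order.Status };
            }
        }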

    Read the article

  • C socket programming: client send() but server select() doesn't see it

    - by Fantastic Fourier
    Hey all, I have a server and a client running on two different machines where the client send()s but the server doesn't seem to receive the message. The server employs select() to monitor sockets for any incoming connections/messages. I can see that when the server accepts a new connection it updates the fd_set array, but select() always returns 0 despite the client send()ing messages. The connection is TCP and the machines are separated by just one router, so dropping packets is highly unlikely. I have a feeling that it's not select() but perhaps send()/sendto() from the client that may be the problem, but I'm not sure how to go about localizing the problem area.

        while(1) {
            readset = info->read_set;
            ready = select(info->max_fd+1, &readset, NULL, NULL, &timeout);
        }

    Above is the server-side code, where the server has a thread that runs select() indefinitely.

        rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address));
        printf("rv = %i\n", rv);
        if (rv < 0) {
            printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno));
            exit(1);
        } else
            printf("connected\n");

        sleep(3);

        char * somemsg = "is this working yet?\0";
        rv = send(sockfd, somemsg, sizeof(somemsg), NULL);
        if (rv < 0)
            printf("MAIN: ERROR send() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

        rv = sendto(sockfd, somemsg, sizeof(somemsg), NULL, &server_address, sizeof(server_address));
        if (rv < 0)
            printf("MAIN: ERROR sendto() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

    And this is the client side, where it connects, sends messages, and prints:

        connected
        MAIN: rv is 4
        MAIN: rv is 4

    Any comments or insightful insights are appreciated.

    Read the article

  • C++ socket concurrent server

    - by gregpi
    Hi all; I'm writing a concurrent server that's supposed to have a communication channel and a data channel. The client initially connects to the communication channel to authenticate; upon successful authentication, the client is then connected to the data channel to access data. My program is already doing that, and I'm using threads. My only issue is that if I try to connect another client, I get a "cannot bind: address already in use" error. I have it this way:

    PART A: A client connects to port 4567 (and enters his login info). A thread is spawned to handle the client (repeated for each client that connects). In the thread created, I have a function (let's call it FUNC_A) that checks the client's login info (don't worry about how the check is done); if successful, the thread starts the data server (listening on 8976), then sends an OK to the client; once received, the client attempts to connect to the data server.

    PART B: Once a client connects to the data server, from inside FUNC_A the client is accepted and another thread is spawned to handle the client's connection to the data server. (Hopefully everything is clear.)

    Now, all that is working fine. However, if I try to connect with a second client, when it gets to PART B I get a "cannot bind error: address already in use". I've tried so many different ways; I've even tried spawning a thread to start the data server and accept the client, and then starting another thread to handle that connection. Still no luck. Please give me a suggestion as to what I'm doing wrong, how I should go about doing this, or what's the best way to implement it. Thank you.

    Read the article

  • Deploying a Rails app on an Ubuntu server using Git

    - by NudeCanalTroll
    I'm completely new to Linux, but today I find myself setting up a server (Ubuntu 10.04 LTS Lucid) from scratch to host a Rails application. Anyway, I managed to get a Rails app up and running on the server itself, but I had to scrap that because I want to use Git. So I set up a git repository on the server, then pushed all the code from my local machine to the repository. Buuuut, of course Git doesn't actually check out the files themselves in a bare repository -- all the code for my Rails app is now only on my local machine. How am I supposed to tell the server to host that? Right now my solution is to have the server use git to pull the code from its own repository. That's the code I'll host for all the world to see. In order to update the code, I guess I'll have to do something like this:

    1. Update the code on my local machine.
    2. Do some git adds, git commits, and a git push.
    3. On the server, do a git pull to update the code.

    So my question is, am I doing this the right way?

    Read the article

  • How can I initialise a server on startup?

    - by djerry
    Hey all, I need to make some connections on startup of a server. I'm using WCF for this client-server application. The problem is that the constructor of the server isn't called at any time, so for the moment I initialize the connections when the first client makes a connection. But this generates problems in a further part. This is my server setup:

        private static ServiceHost _svc;

        static void Main(string[] args)
        {
            NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
            Uri address = new Uri("net.tcp://localhost:8000");
            _svc = new ServiceHost(typeof(MonitoringSystemService), address);
            publishMetaData(_svc, "http://localhost:8001");
            _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
            _svc.Open();
            Console.WriteLine("Listener service gestart op net.tcp://localhost:8000/Monitoring");
            Console.ReadLine();
        }

        private static void publishMetaData(ServiceHost svc, string sEndpointAddress)
        {
            ServiceMetadataBehavior smb = svc.Description.Behaviors.Find<ServiceMetadataBehavior>();
            if (smb != null)
            {
                smb.HttpGetEnabled = true;
                smb.HttpGetUrl = new Uri(sEndpointAddress);
            }
            else
            {
                smb = new ServiceMetadataBehavior();
                smb.HttpGetEnabled = true;
                smb.HttpGetUrl = new Uri(sEndpointAddress);
                svc.Description.Behaviors.Add(smb);
            }
        }

    How can I start the server without waiting for a client to log on, so that I can initialize it? Thanks in advance.
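    One way to avoid waiting for the first client (a sketch of one option, not the only one): construct and initialize the service object yourself, then hand that instance to ServiceHost. This requires marking the service class with InstanceContextMode.Single; InitializeConnections is a hypothetical stand-in for whatever startup work is needed:

        // Requires on the service class:
        // [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        static void Main(string[] args)
        {
            var service = new MonitoringSystemService(); // constructor runs now, at startup
            service.InitializeConnections();             // hypothetical init method

            NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
            Uri address = new Uri("net.tcp://localhost:8000");
            _svc = new ServiceHost(service, address);    // singleton instance, not a type
            publishMetaData(_svc, "http://localhost:8001");
            _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
            _svc.Open();
            Console.ReadLine();
        }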

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}:

    Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be:

        Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName]
        AND [SCDStartDate] <= [EffectiveDate]
        AND [EffectiveDate] <= [SCDEndDate]

    The conventional approach to this would be to use a fully cached lookup, but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators:

    Let’s take a look at the dataflow:

    Notice these are all synchronous components. This approach works just fine; however, it does have the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good.

    OK, that’s the obvious method. Let’s now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts, which help to illustrate what is going on here:

    Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo

    For those that can’t be bothered watching the video, I’ll tell you the results here. The dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip

    Comments are welcome as always.

    @Jamiet

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful; that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

    First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work, because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money, and the old per-socket model was not going to do that going forward.

    Per-core licensing also simplifies virtualization licensing. Physical core = virtual core, at least for licensing. Oversubscribe your processors, that's your lookout. You still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there.

    Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial.

    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, that is the minimum core count Microsoft proposes to swap to transition per-socket to per-core if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ:

        15. How do I migrate from processor licenses to core licenses? What is the migration path?

        Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).

    Looks like the folks who invested in the AMD 12-core chips will make out like bandits.

    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, Compression, or Partitioning in the BI Edition. Those are Enterprise-scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8-core server).

    The only part of the message I am missing is whether the current Failover Licensing Policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.
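    To put rough numbers on that exchange (my own worked example, not Microsoft's): a two-socket server with quad-core chips trades 2 processor licenses for 2 x 4 = 8 core licenses, the break-even case, while a two-socket box with those 12-core AMD chips trades the same 2 processor licenses for 2 x 12 = 24 core licenses, three times the minimum. That is exactly why those owners make out like bandits.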

    Read the article

  • Exclamation Marks in a SQL Query

    - by mikeabyss
    I'm reading over this query, and I came upon a line I don't understand. Here's the line:

        [FETT List]![FETT Search]

    FETT List is a table, and FETT Search is a column in FETT List. Can someone explain what the exclamation mark means? Thanks.

    Read the article

  • Mock a Linq to Sql EntityRef using Moq?

    - by Jeremy Holt
    My DataContext contains a table named UserLog with a one-to-many relationship with table UserProfile.

    In class UserLog:

        public UserProfile UserProfile
        {
            get { return this._UserProfile.Entity; }
        }

    In class UserProfile:

        public EntitySet<UserLog> UserLogs
        {
            get { return this._UserLogs; }
            set { this._UserLogs.Assign(value); }
        }

    How would I go about mocking (using Moq) UserLog.UserProfile without changing the autogenerated code? What I would like to do is something like:

        var mockUserLog = new Mock<UserLog>();
        mockUserLog.Setup(c => c.UserProfile.FirstName).Returns("Fred");

    etc. I can do this if I go into the designer.cs and make FirstName and UserProfile virtual; however, I would like to do this in the partial class. Any ideas? Thanks, Jeremy
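    One workaround sketch (my naming, and it leans on the fact that LINQ to SQL generates partial classes): a partial class cannot make the generated members virtual, but it can add a small interface, and the tests can then mock that interface instead of the entity itself:

        // In the partial class file, not designer.cs (IUserLogView is a made-up name):
        public interface IUserLogView
        {
            string ProfileFirstName { get; }
        }

        public partial class UserLog : IUserLogView
        {
            public string ProfileFirstName
            {
                get { return this.UserProfile.FirstName; }
            }
        }

        // In the test project:
        // var view = new Mock<IUserLogView>();
        // view.Setup(v => v.ProfileFirstName).Returns("Fred");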

    Read the article

  • What does error 0xC02020C4 mean in SSIS?

    I get this error with this description:

        Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "OLE DB Source" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

    Read the article

  • Which performance counters mainly matter for Windows server performance?

    - by Karl Cassar
    We have a website which is sometimes performing slowly and/or completely hangs. I have temporarily set up the default system performance data collector in Performance Monitor, to see if this can shed some light. However, the default Data Collector Set collects a huge number of counters, as well as generating huge log files; just 8 hours of collection resulted in 4GB of data. Which performance counters matter the most when judging server load? Also, is it a performance concern if one leaves such data collectors running indefinitely? Obviously, I will not know when the server will experience slow performance, so I need the logs there so that I can check them out. Any other specific guidelines on monitoring server performance would be greatly appreciated. The OS is Windows Server 2008 R2 (Web Edition).
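    For spot checks outside Performance Monitor, the same counters can also be sampled programmatically; below is a hedged C# sketch reading three of the usual "first look" counters (CPU, free memory, disk queue) via System.Diagnostics.PerformanceCounter. The category and counter names are the stock English ones, and which counters "matter most" is a judgment call, not something from the original question:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CounterSpotCheck
        {
            static void Main()
            {
                var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
                var mem  = new PerformanceCounter("Memory", "Available MBytes");
                var disk = new PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");

                cpu.NextValue(); // the first read of a rate counter is always 0; prime it
                Thread.Sleep(1000);

                Console.WriteLine("CPU %:      {0:F1}", cpu.NextValue());
                Console.WriteLine("Free MB:    {0:F0}", mem.NextValue());
                Console.WriteLine("Disk queue: {0:F2}", disk.NextValue());
            }
        }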

    Read the article
