Search Results

Search found 105727 results on 4230 pages for 'oracle user tips'.


  • Oracle DB 11g R1 … — 29 …, IMC (Russian-language title lost to encoding corruption)

    - by [email protected]
    [The original Cyrillic description did not survive the character encoding; beyond a mention of Oracle Database 11g, its content is unrecoverable.]

    Read the article

  • User/Group Policies in Windows 2000 domain controller

    - by Chris
    In a Windows 2000 Active Directory domain, I have 5 groups of users, and each group has different policies. The problem is that a different desktop loads for only one specific user no matter what changes I make in Administrative Templates. If I copy this user's profile into another group under a different name, the desktop loads as it should, but some policies are not applied. Does anybody know a way to solve this instead of creating a new group and user from scratch?

    Read the article

  • Allow users to ssh to specific user through ldap and stored public keys

    - by iElectric
    I recently set up gitolite, where users access the git repository over SSH as the "gitolite" user. Now I would like to integrate that with LDAP. Each user has a public key in LDAP, and if a user has the "git" objectClass, he should be able to SSH in as the "gitolite" user. I know it's possible to store public keys in LDAP; I'm not sure whether it's possible to gate authentication to the "gitolite" account on that objectClass. EDIT: To clarify, with objectClass git, user "foobar" would be able to log in as "gitolite" over SSH.
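    One way to wire this up, sketched below in Python with python-ldap: on OpenSSH versions that support AuthorizedKeysCommand, a small script can serve the "gitolite" account's authorized keys straight from LDAP, returning only entries that carry the "git" objectClass. The LDAP URI, base DN and sshPublicKey attribute are assumptions, and gitolite would still need its usual per-key forced command to know which user is connecting.

      #!/usr/bin/env python
      # Hypothetical AuthorizedKeysCommand for the "gitolite" account: print the
      # public keys of every LDAP entry that carries the "git" objectClass.
      import sys
      import ldap  # python-ldap

      LDAP_URI = "ldap://ldap.example.com"      # assumption
      BASE_DN = "ou=people,dc=example,dc=com"   # assumption

      def main():
          conn = ldap.initialize(LDAP_URI)
          conn.simple_bind_s()  # anonymous bind; use real credentials in practice
          results = conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE,
                                  "(objectClass=git)", ["sshPublicKey"])
          for _dn, attrs in results:
              for key in attrs.get("sshPublicKey", []):
                  # sshd expects one key per line on stdout
                  sys.stdout.write(key.decode() + "\n")

      if __name__ == "__main__":
          main()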

    Read the article

  • Change default root user with a new one on Ubuntu Server

    - by MultiformeIngegno
    I have a VPS; until now I have used the default root user for SSH access and everything else. For security reasons I'd like to use a different user for root-level work, terminal access and sudo operations. So I created another user and gave him sudo and every other permission. The problem is that all the system files belong to root. What happens if I set PermitRootLogin no? Those files wouldn't be editable by the new user!

    Read the article

  • Default user profile getting filled with temp IE files

    - by Christoffer
    I have several Windows Server 2003 Terminal Servers that service our domain users. Sometimes the Default User profile (C:\Documents and Settings\Default User) gets filled with temporary internet files, so when a new user logs in, he gets a 1 GB profile (instead of 20 MB) that fills up the server fast. Why does the Default User profile get filled with files like that, by whom, and how do I prevent this?

    Read the article

  • Postfix server to receive emails for anonymous users

    - by sachitad
    I have a Postfix server configured with IMAP. Only recipients with a user account on the system are accepted. For example, rcpt to: test@localhost will yield the following error: 550 5.1.1 <test@localhost>: Recipient address rejected: User unknown in local recipient table What I want to achieve is to set up virtual maps that accept email for all users (even if the user doesn't exist on the system) and then forward all those emails to a specific user's mailbox. Is something like this possible?

    Read the article

  • Can Gitosis enforce correct user name/email?

    - by koumes21
    Gitosis is able to authenticate users based on a public/private key pair, and it is able to find out which user is currently committing. However, the user name and email are taken from the client's Git configuration ('git config user.name' etc.), which can be set to arbitrary values. Is there any way to associate user names and emails with their public keys and then make Gitosis use these names and emails as the name and email of the committer?
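    Gitosis itself won't rewrite committer identity, but a server-side update hook can reject pushes whose commits don't match the identity registered for the key. A minimal Python sketch, assuming gitosis exposes the authenticated key name in the GITOSIS_USER environment variable and that a key-to-email map is maintained by hand (both are assumptions to verify against your install):

      #!/usr/bin/env python
      # Hypothetical "update" hook: reject the push if any new commit's author
      # e-mail differs from the one registered for the authenticated key.
      import os
      import subprocess
      import sys

      KEY_TO_EMAIL = {            # maintained by hand; keys are gitosis key names
          "alice@laptop": "alice@example.com",
          "bob@desktop": "bob@example.com",
      }

      def main():
          refname, oldrev, newrev = sys.argv[1:4]
          pusher = os.environ.get("GITOSIS_USER", "")
          expected = KEY_TO_EMAIL.get(pusher)
          if expected is None:
              sys.stderr.write("No e-mail registered for key '%s'\n" % pusher)
              sys.exit(1)
          # A brand-new ref has an all-zero oldrev; check newrev's whole history then.
          rev_range = newrev if set(oldrev) == {"0"} else "%s..%s" % (oldrev, newrev)
          emails = subprocess.check_output(
              ["git", "log", "--format=%ae", rev_range]).decode().split()
          bad = sorted(set(e for e in emails if e != expected))
          if bad:
              sys.stderr.write("Rejected %s: commits authored as %s, expected %s\n"
                               % (refname, ", ".join(bad), expected))
              sys.exit(1)

      if __name__ == "__main__":
          main()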

    Read the article

  • ec2-user password for running sudo -H -u

    - by bool.dev
    I have to run this command to initialize gitosis: sudo -H -u git gitosis-init < /home/ec2-user/id_rsa.pub But that asks me for a password for ec2-user: $ sudo -H -u git gitosis-init < id_rsa.pub [sudo] password for ec2-user: I do not have a password, as I use the default .pem key file to log in. I know I can probably log in as the git user and do this, but is there any other way? Update: Using Linux AMI 12.09 (micro-instance), in region us-east-1 (N. Virginia)

    Read the article

  • How to create a local user with the same name as a likewise domain user?

    - by JamesRyan
    I need to create a local user with the same name as a domain user under CentOS with Likewise installed. When I use useradd it says the user exists, because a domain user with the same name exists. It is a service account for backup and does not work using a domain account. On machines where the local account was added before Likewise was installed it works fine. Is there a way to temporarily disable this check?

    Read the article

  • How can I parse and group/do stats on user-agent strings?

    - by user151841
    I have a database that has the various user-agent strings of visitors to our site. I'd like to do a 'survey' of them to see what browsers our users are using, so that I can know what features I can use in future development. Is there a tool to parse and run statistics on user-agent strings, or a bunch of strings like this? Ideally, I'd like to see a hierarchical grouping of the stats. For instance: Opera/9.80 (Windows Mobile; WCE; Opera Mobi/WMD-50301; U; en) Presto/2.4.13 Version/10.00 Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.14912/1280; U; en) Presto/2.2.0 Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.13918/812; U; en) Presto/2.2.0 Opera/9.64 (Macintosh; Intel Mac OS X; U; en) Presto/2.1.1 Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13918/786; U; en) Presto/2.2.0 Opera/9.60 (J2ME/MIDP; Opera Mini/4.0.10992/432; U; en) Presto/2.2.0 I'd like to see 6 entries for Opera, broken down into 3 for 9.80, 1 for 9.64 and 2 for 9.60, and so forth for all browsers. Other dimensions, such as OS, would cross the boundaries of the browser version hierarchy, but it might be nice to see also.
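    A dedicated parser (the ua-parser project maintains community regexes for exactly this job) is the robust route, but even a small script gives the hierarchical counts described. A rough Python sketch; the pattern only handles leading "Name/x.y" tokens like the Opera strings above, so real logs would need a fuller parser:

      import re
      from collections import Counter, defaultdict

      ua_strings = [
          "Opera/9.80 (Windows Mobile; WCE; Opera Mobi/WMD-50301; U; en) Presto/2.4.13 Version/10.00",
          "Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.14912/1280; U; en) Presto/2.2.0",
          "Opera/9.64 (Macintosh; Intel Mac OS X; U; en) Presto/2.1.1",
          "Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13918/786; U; en) Presto/2.2.0",
      ]

      # family/version comes from the leading "Name/x.y" token
      pattern = re.compile(r"^(?P<family>[A-Za-z]+)/(?P<version>[\d.]+)")

      by_family = Counter()
      by_version = defaultdict(Counter)
      for ua in ua_strings:
          m = pattern.match(ua)
          if not m:
              continue
          by_family[m.group("family")] += 1
          by_version[m.group("family")][m.group("version")] += 1

      for family, total in by_family.most_common():
          print("%s: %d" % (family, total))
          for version, count in by_version[family].most_common():
              print("  %s: %d" % (version, count))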

    Read the article

  • Is it possible to have WAMP run httpd.exe as user [myself] instead of local SYSTEM?

    - by Olivier H
    Hello! I run a Django application over Apache with mod_wsgi, using WAMP. A certain URL allows me to stream the content of image files whose paths are stored in the database. The files can be located either on the local machine or on a network drive (\my\network\folder). With the development server (manage.py runserver), I have no trouble at all reading and streaming the files. With WAMP, and with network-drive files, I get an IOError, obviously because the httpd instance does not have read permission on said drive. In the Task Manager, I see that httpd.exe is run by SYSTEM. I would like to tell WAMP to run the server as [myself], as I have read and write permissions on the shared folder. (Eventually, the production server should be run by a 'www-admin' user that has the permissions.) Mapping the network shared folder to a drive letter (Z: for instance) does not solve this at all. The User/Group directives in httpd.conf do not seem to have any influence on Apache's behaviour. I've also tried the registry: I duplicated the HKLM[...]\wampapache key under HKEY_CURRENT_USER\ and renamed the original key, but then the new key does not seem to be found when I run httpd.exe -n wampapache -k start from a command prompt, or when I run WAMP. I've run out of ideas :) Has anybody ever had the same issue?

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure “Web Roles” I wanted an easier way to deploy new versions to the Azure Staging environment as well as a reliable process to rollback deployments to a certain “known good” source control commit checkpoint.  By configuring our JetBrains’ TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I’ll show you how to take control of your Azure publish process. Step 0: Configuring your Azure Project in Visual Studio Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly.  Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows: Install the prerequisite Windows Azure SDK Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type) Configure your Role and Service Configuration/Definition as desired Right-click on your azure project and choose “Publish,” create a publish profile, and push to your web role You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly.  Once your Windows Azure project is setup correctly, we are ready to move on to understanding the Azure Publish process. Understanding the Azure Publish Process The actual Windows Azure project is fairly simple at its core—it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files: ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information. ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files. When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/.  If you want to build your Azure Project from the command line, it’s as simple as calling MSBuild on the “Publish” target: msbuild.exe /target:Publish Windows Azure PowerShell Cmdlets The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).  These cmdlets are what will let us create deployments without Visual Studio or other user intervention. Preparing TeamCity for Azure Deployments Now we are ready to get our TeamCity server setup so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough, just go to https://www.windowsazure.com/en-us/develop/net/ and click “Install” Once this SDK is installed, I recommend running a test build to make sure your project is building correctly.  You’ll want to setup your build step using MSBuild with the “Publish” target against your solution file.  Mine looks like this: Assuming the build was successful, you will now have the two *.cspkg and *cscfg files within your build directory.  
If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell Cmdlets: Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell After installing the Cmdlets, you’ll need to get your Azure Subscription Info using the Get-AzurePublishSettingsFile command. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script. Scripting the CI Deploy Process Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner.  Before we look at any code, here’s a breakdown of our deployment algorithm: Setup your variables, including the location of the *.cspkg and *cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/ Import the Windows Azure PowerShell Cmdlets Import and set your Azure Subscription information (this is basically your authentication/authorization step, so protect your settings file Now look for a current deployment, and if you find one Upgrade it, else Create a new deployment Pretty simple and straightforward.  Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398): $subscription = "[Your Subscription Name]" $service = "[Your Azure Service Name]" $slot = "staging" #staging or production $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg" $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg" $timeStampFormat = "g" $deploymentLabel = "ContinuousDeploy to $service v%build.number%"   Write-Output "Running Azure Imports" Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1" Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings" Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription   function Publish(){ $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue   if ($a[0] -ne $null) { Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. " } if ($deployment.Name -ne $null) { #Update deployment inplace (usually faster, cheaper, won't destroy VIP) Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename. Upgrading deployment." 
UpgradeDeployment } else { CreateNewDeployment } }   function CreateNewDeployment() { write-progress -id 3 -activity "Creating New Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"   $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID" }   function UpgradeDeployment() { write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"   # perform Update-Deployment $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID" }   Write-Output "Create Azure Deployment" Publish   Creating the TeamCity Build Step The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell”.  Then set your script to “Source Code,” the script execution mode to “Put script into PowerShell stdin with “-Command” arguments” and then copy/paste in the above script (replacing the placeholder sections with your values).  This should look like the following:   Wrap Up After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity.  You can configure this step to run whenever you’d like using build triggers – for example, you could even deploy whenever a new master branch deploy comes in and passes all required tests. In the script I’ve hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even setup a deployment configuration variable and set it as desired. After your TeamCity Build Configuration is complete, you’ll see something that looks like this: Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run.”  This will bring up a dialog like the one below, where you can select the last change to use for your deployment.  Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.   Enjoy!

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative affects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) make short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator ODatas feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
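    The test harness linked above is C#, but the blocking-and-parallel approach itself is small enough to sketch. Below is a rough Python outline of the same idea (not the author's code): split the source file into fixed-size blocks, upload them concurrently, then commit the block list. put_block and put_block_list are stand-ins for whatever storage client is in use (the post uses the .NET PutBlock/PutBlockList APIs).

      import base64
      import concurrent.futures
      import hashlib

      BLOCK_SIZE = 1024 * 1024  # 1 MB, the size the post found best on average

      def put_block(container, blob, block_id, data, content_md5=None):
          """Stand-in for the storage client's PutBlock call (assumption)."""
          raise NotImplementedError

      def put_block_list(container, blob, block_ids):
          """Stand-in for PutBlockList, which commits the uploaded blocks (assumption)."""
          raise NotImplementedError

      def split_blocks(path, block_size=BLOCK_SIZE):
          # Block IDs must be equal-length base64 strings.
          with open(path, "rb") as f:
              index = 0
              while True:
                  chunk = f.read(block_size)
                  if not chunk:
                      break
                  yield base64.b64encode(b"%08d" % index).decode(), chunk
                  index += 1

      def upload_block(container, blob, block_id, data):
          # Send the MD5 so the service can verify the bytes that arrived.
          md5 = base64.b64encode(hashlib.md5(data).digest()).decode()
          put_block(container, blob, block_id, data, content_md5=md5)

      def upload_file(container, blob, path, workers=8):
          blocks = list(split_blocks(path))  # reads the file into memory; fine for a sketch
          with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
              futures = [pool.submit(upload_block, container, blob, bid, data)
                         for bid, data in blocks]
              for f in futures:
                  f.result()  # surface any upload error
          put_block_list(container, blob, [bid for bid, _ in blocks])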

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved of each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped up version of these called sp_whoisactive, written by Adam Machanic which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity then it is absolutely indispensable. Anyway, back to the point of this blog, we are going to look at getting the information from sp_who2 for a remote server. I wrote this Powershell script a week or so ago and was quietly happy with it for a while. I’m relatively new to Powershell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server # connection and query stuff         $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True" $Query = "EXEC sp_who2" $Connection = new-object system.Data.SQLClient.SQLConnection $Table = new-object "System.Data.DataTable" $Connection.connectionstring = $ConnectionStr try{ $Connection.open() $Command = $Connection.CreateCommand() $Command.commandtext = $Query $result = $Command.ExecuteReader() $Table.Load($result) } catch{ # Show error $error[0] | format-list -Force } $Title = "Data access processes (" + $Table.Rows.Count + ")" $Table | Out-GridView -Title $Title $Connection.close() So this is pretty straightforward, create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free display these results to the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2′ for us. Depending on how many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this, it seems a pretty simple script and was working well for me, I have added a few parameters to control the output and give me more specific details but then I see a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server $Processes = $SMOServer.EnumProcesses() $Title = "SMO processes (" + $Processes.Rows.Count + ")" $Processes | Out-GridView -Title $Title Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information so my first thought was to which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function. 
All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The Powershell console goes blank for a while as the code is executed internally when Measure-Command is used but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading but in a simple comparison like this we can simply cross refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2 ) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest is partly due to the cost overhead of having to construct the data connection and so on where as the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ Query execution method (sp_who2) SMO EnumProcesses Description - Urn What looks like an XML or JSON representation of the server name and the process ID SPID Spid The process ID Status Status The status of the process Login Login The login name of the user executing the command HostName Host The name of the computer where the  process originated BlkBy BlockingSpid The SPID of a process that is blocking this one DBName Database The database that this process is connected to Command Command The type of command that is executing CPUTime Cpu The CPU activity related to this process DiskIO - The Disk IO activity related to this process LastBatch - The time the last batch was executed from this process. ProgramName Program The application that is facilitating the process connection to the SQL Server. SPID1 - In my experience this is always the same value as SPID. REQUESTID - In my experience this is always 0 - Name In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2 - MemUsage An indication of the memory used by this process but I don’t know what it is measured in (bytes, Kb, Mb…) - IsSystem True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data. - ExecutionContextID In my experience this is always 0 so could be analogous to REQUESTID from sp_who2. Please note, these are my own very brief descriptions of these columns, detail can be found from MSDN for columns in the sp_who results here http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common then I would use that description, in other cases then the information returned is purely for interpretation by the reader. Rather annoyingly both result sets have useful information that the other doesn’t. sp_who2 returns Disk IO and LastBatch information which is really useful but the SMO processes method give you IsSystem and MemUsage which have their place in fault diagnosis methods too. So which is better? 
On reflection I think I prefer to use the sp_who2 method primarily but knowing that the SMO Enumprocesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.

    Read the article

  • jboss 5.0 data source configuration in ear file. How can I run oracle 10g and 11g on the same server?

    - by Joe
    Currently my setup is: in my ear META-INF/jboss-app.xml <jboss-app> <module> <service>datasource-ds.xml</service> </module> </jboss-app> and datasource-ds.xml <datasources> <local-tx-datasource> <jndi-name>jdbc/mydeployment</jndi-name> <connection-url>jdbc:oracle:thin:@eir:myport:mydbname</connection-url> <driver-class>oracle.jdbc.driver.OracleDriver</driver-class> <user-name>myuser</user-name> <password>mypassword</password> <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name> <metadata> <type-mapping>Oracle9i</type-mapping> </metadata> </local-tx-datasource> </datasources> and it works when ojdbc5.jar is in my servername/lib How can I configure my Oracle driver information in my .ear file so that I can have two different ear deployments, one using Oracle 10g and one using Oracle 11g?

    Read the article

  • Convert byte array from Oracle RAW to System.Guid?

    - by Cory McCarty
    My app interacts with both Oracle and SQL Server databases using a custom data access layer written in ADO.NET using DataReaders. Right now I'm having a problem with the conversion between GUIDs (which we use for primary keys) and the Oracle RAW datatype. Inserts into oracle are fine (I just use the ToByteArray() method on System.Guid). The problem is converting back to System.Guid when I load records from the database. Currently, I'm using the byte array I get from ADO.NET to pass into the constructor for System.Guid. This appears to be working, but the Guids that appear in the database do not correspond to the Guids I'm generating in this manner. I can't change the database schema or the query (since it's reused for SQL Server). I need code to convert the byte array from Oracle into the correct Guid.
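    One thing worth ruling out here (an illustration, not the questioner's code, which is C#): Guid.ToByteArray() emits the first three GUID fields in little-endian order, so the bytes stored in the RAW(16) column will not read back as the same hex string shown by Guid.ToString() or by Oracle tools. Python's uuid module shows the two layouts side by side:

      import uuid

      g = uuid.UUID("12345678-9abc-def0-1234-56789abcdef0")

      # Network-order bytes: matches the textual representation.
      print(g.bytes.hex())     # 123456789abcdef0123456789abcdef0

      # Mixed-endian layout: first three fields byte-swapped. This is the layout
      # .NET's Guid.ToByteArray() produces, hence what lands in the RAW column.
      print(g.bytes_le.hex())  # 78563412bc9af0de123456789abcdef0

    If both writing and reading go through ToByteArray() and new Guid(byte[]), the values round-trip consistently; the mismatch typically only shows up when comparing against hex dumps or GUIDs generated on the database side.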

    Read the article

  • nhibernate says 'mapping exception was unhandled' no persister for: MyNH.Domain.User

    - by mrblah
    Hi, I am using nHibernate and fluent. I created a User.cs: public class User { public virtual int Id { get; set; } public virtual string Username { get; set; } public virtual string Password { get; set; } public virtual string Email { get; set; } public virtual DateTime DateCreated { get; set; } public virtual DateTime DateModified { get; set; } } Then in my mappinds folder: public class UserMapping : ClassMap<User> { public UserMapping() { WithTable("ay_users"); Not.LazyLoad(); Id(x => x.Id).GeneratedBy.Identity(); Map(x => x.Username).Not.Nullable().WithLengthOf(256); Map(x => x.Password).Not.Nullable().WithLengthOf(256); Map(x => x.Email).Not.Nullable().WithLengthOf(100); Map(x => x.DateCreated).Not.Nullable(); Map(x => x.DateModified).Not.Nullable(); } } Using the repository pattern for the nhibernate blog: public class UserRepository : Repository<User> { } public class Repository<T> : IRepository<T> { public ISession Session { get { return SessionProvider.GetSession(); } } public T GetById(int id) { return Session.Get<T>(id); } public ICollection<T> FindAll() { return Session.CreateCriteria(typeof(T)).List<T>(); } public void Add(T product) { Session.Save(product); } public void Remove(T product) { Session.Delete(product); } } public interface IRepository<T> { T GetById(int id); ICollection<T> FindAll(); void Add(T entity); void Remove(T entity); } public class SessionProvider { private static Configuration configuration; private static ISessionFactory sessionFactory; public static Configuration Configuration { get { if (configuration == null) { configuration = new Configuration(); configuration.Configure(); configuration.AddAssembly(typeof(User).Assembly); } return configuration; } } public static ISessionFactory SessionFactory { get { if (sessionFactory == null) sessionFactory = Configuration.BuildSessionFactory(); return sessionFactory; } } private SessionProvider() { } public static ISession GetSession() { return SessionFactory.OpenSession(); } } My config: <?xml version="1.0" encoding="utf-8" ?> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string">Server=.\SqlExpress;Initial Catalog=TestNH;User Id=dev;Password=123</property> <property name="show_sql">true</property> </session-factory> </hibernate-configuration> I created a console application to test the output: static void Main(string[] args) { Console.WriteLine("starting..."); UserRepository users = new UserRepository(); User user = users.GetById(1); Console.WriteLine("user is null: " + (null == user)); if(null != user) Console.WriteLine("User: " + user.Username); Console.WriteLine("ending..."); Console.ReadLine(); } Error: nhibernate says 'mapping exception was unhandled' no persister for: MyNH.Domain.User What could be the issue, I did do the mapping?

    Read the article

  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 oracle instances on it, and the file system is nfs with netapp. After shutdown the databases, one process for each database doesn't quit for a long time. Each kill -i doesn't work. I tried to truss, pfile it, the command through error. And iostat shows there are lots of IO to the netapp server. So someone said the process was busy writing data to remote netapp server, and before the write complete, it won't quit. So what need to be done was just wait until all the IO was done. After wait for longer time (about 1.5 hours), the processes exit. So my question is: how can a process ignore the kill signal? As far as I know, if we kill -9, it will stop immediately. Do you encounter such situation kill -i doesn't kill the process right away? TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1469 25053 0 22:36:53 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 1051 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1493 25053 0 22:37:07 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 471 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 oracle 1495 25053 0 22:37:22 pts/1 0:00 grep dbw0 TEST7-stdby-phxdbnfs11$ ps -ef|grep smon oracle 1524 25053 0 22:38:02 pts/1 0:00 grep smon TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1526 25053 0 22:38:06 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1528 25053 0 22:38:19 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ truss -p 26795 truss: unanticipated system error: 26795 TEST7-stdby-phxdbnfs11$ pfiles 26795 pfiles: unanticipated system error: 26795

    Read the article

  • Refactoring a custom User model to use UserProfile: Should I create a custom UserManager or add use…

    - by BryanWheelock
    I have been refactoring an app that had customized the standard User model from django.contrib.auth.models by creating a UserProfile and defining it with AUTH_PROFILE_MODULE. The problem is that the attributes in UserProfile are used throughout the project to determine what the User sees. I had been creating tests and putting in this type of statement repeatedly: user = User.objects.get(pk=1) user_profile = user.get_profile() if user_profile.karma > 10: do_some_stuff() This is tedious and I'm now wondering if I'm violating the DRY principle. Would it make more sense to create a custom UserManager that automatically loads the UserProfile data when the user is requested? I could even iterate over the UserProfile attributes and append them to the User model. This would save me having to update all the references to the custom model attributes that litter the code. Of course, I'd have to reverse the process to allow the User and UserProfile models to be updated correctly. Which approach is more Django-esque?
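    Rather than copying every profile attribute onto User, a lighter option is to attach thin helpers to the stock User model once and call them everywhere. A minimal sketch, assuming the karma field from the snippet above and the old AUTH_PROFILE_MODULE/get_profile() machinery:

      from django.contrib.auth.models import User

      def _karma(self):
          # get_profile() caches the profile on the instance after the first
          # call, so repeated reads cost at most one extra query per user.
          return self.get_profile().karma

      def _has_enough_karma(self, threshold=10):
          return self.karma > threshold

      # Monkey-patch the helpers onto User; run this at import time, e.g. from
      # an app's models.py.
      User.add_to_class("karma", property(_karma))
      User.add_to_class("has_enough_karma", _has_enough_karma)

    Views and tests can then read request.user.karma or request.user.has_enough_karma() without touching UserProfile directly, which keeps the DRY concern contained without a custom UserManager.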

    Read the article

  • Django User model, adding function

    - by Hellnar
    Hello, I want to add a new function to the default User model of Django for retrieving a related list of a model. Take the following Foo model: class Foo(models.Model): owner = models.ForeignKey(User, related_name="owner") likes = models.ForeignKey(User, related_name="likes") ........ #at some view user = request.user foos= user.get_related_foo_models() How can this be achieved?
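    Two routes, sketched below (note that related_name names the reverse accessor on User, so "owner" and "likes" read oddly as reverse names; "owned_foos" and "liked_foos" are assumed instead): either use the reverse managers Django already creates, or attach an explicit method with add_to_class.

      from django.contrib.auth.models import User
      from django.db import models

      class Foo(models.Model):
          # related_name defines the reverse accessor available on User
          owner = models.ForeignKey(User, related_name="owned_foos")
          likes = models.ForeignKey(User, related_name="liked_foos")

      # Option 1: no new function needed, the reverse managers already exist:
      #   request.user.owned_foos.all()
      #   request.user.liked_foos.all()

      # Option 2: attach an explicit helper to the stock User model.
      def get_related_foo_models(self):
          return Foo.objects.filter(owner=self)

      User.add_to_class("get_related_foo_models", get_related_foo_models)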

    Read the article

  • (Oracle Performance) Will a query based on a view limit the view using the where clause?

    - by BestPractices
    In Oracle (10g), when I use a View (not Materialized View), does Oracle take into account the where clause when it executes the view? Let's say I have: MY_VIEW = SELECT * FROM PERSON P, ORDERS O WHERE P.P_ID = O.P_ID And I then execute the following: SELECT * FROM MY_VIEW WHERE MY_VIEW.P_ID = '1234' When this executes, does oracle first execute the query for the view and THEN filter it based on my where clause (where MY_VIEW.P_ID = '1234') or does it do this filtering as part of the execution of the view? If it does not do the latter, and P_ID had an index, would I also lose out on the indexing capability since Oracle would be executing my query against the view which doesn't have the index rather than the base table which has the index?

    Read the article

  • CDI SessionScoped Bean instance remains unchanged when logging in with a different user

    - by Jason Yang
    I've been looking for the workaround of this problem for rather plenty of time and no result, so I ask question here. Simply speaking, I'm using a CDI SessionScoped Bean User in my project to manage user information and display them on jsf pages. Also container-managed j_security_check is used to resolve authentication issue. Everything is fine if first logout with session.invalidate() and then login in the same browser tab with a different user. But when I tried to directly login (through login.jsf) with a new user without logout beforehand, I found the user information remaining unchanged. I debugged and found the User bean, as well as the HttpSession instance, always remaining the same if login with different users in the same browser, as long as session.invalidate() not invoked. But oddly, the session id did modified, and I've both checked in Java code and Firebug. org.apache.catalina.session.StandardSessionFacade@5d7b4092 StandardSession[c69a71d19f369d08b5dddbea2ef0] attrName = org.jboss.weld.context.conversation.ConversationIdGenerator : attrValue=org.jboss.weld.context.conversation.ConversationIdGenerator@583c9dd8 attrName = org.jboss.weld.context.ConversationContext.conversations : attrValue = {} attrName = org.jboss.weld.context.http.HttpSessionContext#org.jboss.weld.bean-Discipline-ManagedBean-class com.netease.qa.discipline.profile.User : attrValue = Bean: Managed Bean [class com.netease.qa.discipline.profile.User] with qualifiers [@Any @Default @Named]; Instance: com.netease.qa.discipline.profile.User@c497c7c; CreationalContext: org.jboss.weld.context.CreationalContextImpl@739efd29 attrName = javax.faces.request.charset : attrValue = UTF-8 org.apache.catalina.session.StandardSessionFacade@5d7b4092 StandardSession[c6ab4b0c51ee0a649ef696faef75] attrName = org.jboss.weld.context.conversation.ConversationIdGenerator : attrValue = org.jboss.weld.context.conversation.ConversationIdGenerator@583c9dd8 attrName = com.sun.faces.renderkit.ServerSideStateHelper.LogicalViewMap : attrValue = {-4968076393130137442={-7694826198761889564=[Ljava.lang.Object;@43ff5d6c}} attrName = org.jboss.weld.context.ConversationContext.conversations : attrValue = {} attrName = org.jboss.weld.context.http.HttpSessionContext#org.jboss.weld.bean-Discipline-ManagedBean-class com.netease.qa.discipline.profile.User : attrValue = Bean: Managed Bean [class com.netease.qa.discipline.profile.User] with qualifiers [@Any @Default @Named]; Instance: com.netease.qa.discipline.profile.User@c497c7c; CreationalContext: org.jboss.weld.context.CreationalContextImpl@739efd29 attrName = javax.faces.request.charset : attrValue = UTF-8 Above block contains two successive logins and their Session info. We can see that the instance(1st row) the same while session id(2nd row) different. Seems that session object is reused to contain different session id and CDI framework manages session bean life cycle in accordance with the session object only(?). I'm wondering whether there could be only one server-side session object within the same browser unless invalidated? Since I'm adopting j_security_check I fancy intercepting it and invalidating old session is not so easy. So is it possible to accomplish the goal without altering the CDI+JSF+j_security_check design that one can relogin with different account in the same or different tab within the same browser? Really look forward for your response. More info: Glassfish v3.1 is my appserver.

    Read the article

  • User Profile objects are empty, even though the user logged in properly?

    - by Ahmed
    I use the asp:Login control and the user can log in properly, but when checking the user's Profile information within the LoggedIn event of the Login control, all of the fields in the Profile object are empty. Also, User.Identity.IsAuthenticated always returns false. All of these issues resolve themselves after navigating to another page. Why does User.Identity.IsAuthenticated return false even though the user logged in properly? And is there any way to get the user's profile information within the LoggedIn event of the Login control?

    Read the article
