Search Results

Search found 28957 results on 1159 pages for 'single instance'.

Page 523/1159

  • SQL SERVER – SQL Server Performance: Indexing Basics – SQL in Sixty Seconds #006 – Video

    - by pinaldave
    A DBA’s role is critical because a production environment has to run 24×7, so maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analyzing, and maintaining indexes. Though we have learned indexing concepts since our college days, indexing implementation inside SQL Server can vary. Understanding this behavior and designing our applications appropriately will make sure the application performs to its highest potential. Vinod Kumar and I have often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexing, but there is some minimum expertise one should gain if performance is a concern. More on Indexes: SQL Index SQL Performance I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Here is my interview with Vinod Kumar. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
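    For readers who want to try the "first baby step" described above, here is a minimal illustrative T-SQL sketch (the table, column, and index names are invented for the example): it creates a nonclustered index and then checks whether SQL Server is actually using it via sys.dm_db_index_usage_stats.

    -- Hypothetical table and index names, for illustration only.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);  -- covering columns for common queries
    GO

    -- Later, check whether the index is being read (seeks/scans) or only maintained (updates).
    SELECT  OBJECT_NAME(s.object_id) AS TableName,
            i.name                   AS IndexName,
            s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM    sys.dm_db_index_usage_stats AS s
    JOIN    sys.indexes AS i
            ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE   s.database_id = DB_ID()
            AND s.object_id = OBJECT_ID('dbo.Orders');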

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.

    Read the article

  • Windows Server 2008 Standard vs. Web

    - by Andreas
    I'm currently comparing Windows Server 2008 versions to see what to use. What I found, and what might affect me, is this: RAM: 32GB (the same); Sockets: 4 (the same); Remote Desktop: 2 (the same); IIS: true (the same); Application Server: only Standard. I will run my server as a single-CPU (4 core), 8GB RAM, 2x RAID1 web server running: IIS, ASP.NET, .NET 4, a third-party mail server (only for sending mail from my web application), SQL Server Express (my data is not more than 10 GB), and some minor applications for import and export of data. I might use an external load balancer if I install a second machine in the future. My question is whether you see any reason for me to go for Standard, which is 4x the price compared to Web. BR Andreas

    Read the article

  • "Has Oracle written the script for CRM success?" - Anthony Lye on Customer Experience at BAFTA

    - by Richard Lefebvre
    Anthony Lye showcased Oracle Fusion CRM at a BAFTA gathering, and MyCustomer.com covered the story under the title "Has Oracle written the script for CRM success?" According to MyCustomer.com, "Oracle's SVP of CRM Anthony Lye set the scene for the event, suggesting products are becoming commoditized, so that the only way to differentiate is through the relationship with the customer. But he warned that "customers are more and more in control of that relationship, so you have to provide great experiences for them." "The quickest win within your organization to create a single view is to connect your marketing organization with your selling organization, align goals, processes, people and technology," Anthony explained. "And this is a transition that is already happening - "VPs of marketing have started turning up in the same meetings as VPs of sales, we have started to see that they want to work together" - but this convergence needs nurturing." "In Fusion there are capabilities to align the organisation - we enable marketing on the same platform to build campaigns connected to sales stages. It can affect leads and opportunities at the top end of the funnel. And the selling organisation can take advantage of marketing content - the materials that are exclusively within marketing can now be used by sales. Your sales teams have been campaigning forever, but it's usually by email, it isn't aligned with the corporate message and it's being sent to people it shouldn't. By aligning them we can increase output and the quality of that output." Anthony concluded: "Operating in a disconnected fashion having two distinct systems will cost you time and money. So we feel there's a material advantage in a solution like this." Enjoy the full story at http://www.mycustomer.com/topic/marketing/has-oracle-written-script-crm-success/139958

    Read the article

  • Memcached server: Is it a good practice to point two server urls to the same server?

    - by Niro
    I have a system where there are connections to a memcache server from several different files and servers. I would like to stay with one server but keep the option of increasing the number of memcache servers (for periods of high traffic). My idea is to tell memcache there are two servers, while the two URLs will point (by DNS) to a single server. In the future, if I want, I can add a server and change DNS without changing the code in many places. Is this a good practice? Is there a performance cost to the fact that there are two server connections but they both point to the same server? Any other idea how to achieve instant expandability of memcache capacity without the need to change the code and redeploy?

    Read the article

  • Recover Windows 7 from Linux Ubuntu?

    - by macha
    I have two partitions, with Linux Ubuntu running on one partition and Windows 7 running on the other. Now when I try to boot into Windows 7, I get an error saying the /system32/winload.exe file is corrupted or deleted. I still have the Windows 7 files on my system but do not have the DVD. I have made a bootable USB with Windows 7, but when I boot from the USB stick, a blue screen comes up with weird error messages. Now I am trying to restore my Windows instance to a restore point where it can work normally. How can I do that in my situation?

    Read the article

  • Outgoing telnet: Unable to connect to remote host: Connection refused

    - by brendan
    I am trying to telnet from Ubuntu server (running Maverick) on ec2 to another machine I have set up not on ec2 - we'll call it "server-x". The two machines are connected via vpn. I can ping from the ec2 machine to server-x no problem. On another machine also on the vpn but also not on ec2 I can telnet to server-x without issue so it is accepting incoming connections on that port. But when I run telnet from the ubuntu instance to server-x I get : ubuntu@ip-10-111-11-11:~$ telnet 5.1.1.1 9143 Trying 5.1.1.1... telnet: Unable to connect to remote host: Connection refused Other telnets work like this: ubuntu@ip-10-111-11-11:~$ telnet imap.gmail.com 993 Trying 173.194.76.108... Connected to gmail-imap.l.google.com. Escape character is '^]'. I have disabled ufw on the ubuntu machine. Is there anything else that can be blocking this outgoing connection? I tried adding the outgoing port to iptables but I'm not certain I'm doing that right.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #049

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes below. Let me know which one of the following is your favorite article from memory lane. 2007 Two Connections Related Global Variables Explained – @@CONNECTIONS and @@MAX_CONNECTIONS @@CONNECTIONS returns the number of attempted connections, either successful or unsuccessful, since SQL Server was last started. @@MAX_CONNECTIONS returns the maximum number of simultaneous user connections allowed on an instance of SQL Server. The number returned is not necessarily the number currently configured. Query Editor – Microsoft SQL Server Management Studio This post may be very simple for most users of SQL Server 2005. Earlier this year, I received one question many times – where is Query Analyzer in SQL Server 2005? I wrote a small post about it and pointed many users to it – SQL SERVER – 2005 Query Analyzer – Microsoft SQL SERVER Management Studio. Recently I have been receiving a similar question again. OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE SQL Server 2005 has a new OUTPUT clause, which is quite useful. The OUTPUT clause has access to the inserted and deleted (virtual) tables, just like triggers. The OUTPUT clause can be used to return values to the client. It can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements, and it can populate a table variable, a permanent table, or a temporary table. @@IDENTITY still works with SQL Server 2005; however, I find the OUTPUT clause very easy and powerful to use. Let us understand the OUTPUT clause using an example (a small illustrative script is also included at the end of this post). Find Name of The SQL Server Instance Stored procedures have to run different logic based on the database server. We came up with two different solutions: 1) when the database schema had changed a lot, we wrote a completely new stored procedure and deprecated the older version once it was not needed; 2) when the logic depended on the server name, we used the global variable @@SERVERNAME. It was very convenient while writing a migration script which depended on the server name for the same database. Explanation of TRY…CATCH and ERROR Handling With RAISERROR Function One of the developers at my company thought that we could not use the RAISERROR function in the new SQL Server 2005 TRY…CATCH feature. When asked for an explanation, he pointed to the SQL SERVER – 2005 Explanation of TRY…CATCH and ERROR Handling article as an excuse, suggesting that I did not give an example of RAISERROR with TRY…CATCH. We all thought it was funny. Just to keep the record straight: TRY…CATCH can certainly use the RAISERROR function. Different Types of Cache Objects Several kinds of objects can be stored in the procedure cache: Compiled plans: when the query optimizer finishes compiling a query plan, the principal output is the compiled plan. Execution contexts: while executing a compiled plan, SQL Server has to keep track of information about the state of execution. Cursors: cursors track the execution state of server-side cursors, including the cursor’s current location within a resultset. Algebrizer trees: the Algebrizer’s job is to produce an algebrizer tree, which represents the logical structure of a query. Open SSMS From Command Prompt – sqlwb.exe Example This article was written at the request and suggestion of a senior web developer at my organization.
    Due to the nature of this article, most of the content is taken from Books Online. sqlwb is the command prompt utility which opens SQL Server Management Studio. The sqlwb command does not run queries from the command prompt; the sqlcmd utility runs queries from the command prompt – read the article for more information. 2008 Puzzle – Solution – Computed Columns Datatype Explanation Just a day before, I wrote the article SQL SERVER – Puzzle – Computed Columns Datatype Explanation, which was inspired by SQL Server MVP Jacob Sebastian. I suggest that before continuing with this article you read the original puzzle question, SQL SERVER – Puzzle – Computed Columns Datatype Explanation. The question was: if the computed column was of datatype TINYINT, how do you create a computed column of datatype INT? 2008 – Find If Index is Being Used in Database Very often I am asked how to find whether an index is being used in the database or not. If a database has many indexes and not all of them are used, it can adversely affect performance: a higher number of indexes slows down INSERT / UPDATE / DELETE operations while speeding up SELECT operations. It is recommended to drop any unused indexes from a table to improve performance. 2009 Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries If you want to see what’s going on here, I think you need to shift your point of view from an implementation-centric view to an ANSI point of view. ANSI does not guarantee the processing order. Figure 2 is interesting, but it will be potentially misleading if you don’t understand the ANSI rule set SQL Server operates under in most cases. Implementation thinking can certainly be useful at times, when you really need that multi-million-row query to finish before the backups fire off, but in this case it’s counterproductive to understanding what is going on. SQL Server Management Studio and Client Statistics Client statistics are very important. Many times, people equate a query’s execution plan with query cost. This is not a good comparison: the two parameters are different, and they are not always related. It is possible that the query cost of a statement is low, but the amount of data returned is considerably larger, which causes the query to run slowly. How do we know whether a query is retrieving a large amount of data or very little data? 2010 I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.
    SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1 SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2 SQL SERVER – Index Created on View not Used Often – Limitation of the View 3 SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4 SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5 SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6 SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7 SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8 SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9 SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10 SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11 SQL SERVER – Get Query Running in Session I was recently looking for the syntax to get the query running in a particular session. I always remembered the syntax and had actually written it down before, but somehow it was not coming to mind quickly this time. I searched online and ended up on my own article written last year, SQL SERVER – Get Last Running Query Based on SPID. I felt that I am getting old because I forgot this really simple syntax. Find Total Number of Transaction on Interval In one of my recent performance tuning assignments I was asked how someone can know how many transactions are happening on a server during a certain interval. I had a handy script for that. The script displays the transactions that happened on the server at an interval of one minute. You can change the WAITFOR DELAY to any other interval and it should work. 2011 Here are two DMVs which are newly introduced in SQL Server 2012 and provide vital information about SQL Server. DMV – sys.dm_os_volume_stats – information about the operating system volume DMV – sys.dm_os_windows_info – information about the operating system SQL Backup and FTP – A Quick and Handy Tool I have used this tool extensively since 2009 on numerous occasions and found it to be very impressive. What separates it from the crowd the most is its apparent simplicity and speed. When I install SQLBackupAndFTP and configure backups – all in 1 or 2 minutes – my clients are always impressed. Quick Note about JOIN – Common Questions and Simple Answers In this blog post we are going to talk about JOIN and lots of things related to it. I recently started office hours to answer the community’s questions and issues, and I receive so many questions related to JOIN. I will share a few of them here. Most of them are basic, but note that the basics are of great importance. 2012 Importance of User Without Login Question: “In recent versions of SQL Server we can create a user without a login. What is the use of it?” Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it. Preserve Leading Zero While Copying to Excel from SSMS Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts there has been plenty of interest generated on this subject, and there are a few questions I keep getting. One of the questions is how to preserve leading zeros while copying data from SSMS to Excel.
    Well, it is handled almost the same way as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1 Earlier on this blog we asked two puzzles. The response from all of you was nothing but amazing – I received 350+ responses. Many are valid, and many were things I had not thought about. I strongly suggest you read all the puzzles and their answers – trust me, if you start reading the comments you will not stop until you have read every single one. Seriously, trust me on it. Personally, I have learned a lot from them. Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video http://www.youtube.com/watch?v=TvlYy-TGaaA Importance of User Without Login – T-SQL Demo Script Earlier I wrote a blog post, SQL SERVER – Importance of User Without Login, and my friend and SQL expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases inside SQL Server 2012. Lots of people asked me if I could explain the same concept again, so here is a small demonstration of it. Let me show you how a user without login can help. Before we continue on this subject, I strongly recommend that you read my earlier blog post. In the following demo I am going to demonstrate the following steps: log in using the system admin account, create a user without login, check access, impersonate the user without login, check access, revert impersonation, give permission to the user without login, impersonate the user without login, check access, revert impersonation, and clean up. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
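    As promised in the OUTPUT clause notes above, here is a small self-contained T-SQL sketch (the table and column names are invented for the example) showing OUTPUT capturing the rows affected by an UPDATE into a table variable:

    -- Illustrative only: a temporary table plus a table variable to catch the OUTPUT rows.
    CREATE TABLE #Products (ProductID INT, Price MONEY);
    INSERT INTO #Products (ProductID, Price) VALUES (1, 10.00);
    INSERT INTO #Products (ProductID, Price) VALUES (2, 20.00);

    DECLARE @PriceChange TABLE (ProductID INT, OldPrice MONEY, NewPrice MONEY);

    UPDATE #Products
    SET    Price = Price * 1.10
    OUTPUT inserted.ProductID, deleted.Price, inserted.Price
    INTO   @PriceChange (ProductID, OldPrice, NewPrice);

    SELECT * FROM @PriceChange;  -- old and new price for every updated row
    DROP TABLE #Products;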

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc.). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

    public class CustomerTools
    {
        [Verb]
        public void UpdateName(int customerId, string firstName, string lastName)
        {
            // invoke internal services/domain objects/whatever to perform the update
        }
    }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code: Parser.Run(args, new CustomerTools()); Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this: SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber ‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around: Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would. Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some JavaScript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • How to set up Task Scheduler task that cleans up firefox window it opens?

    - by WilliamKF
    I have a Windows 7 Task Scheduler task that invokes firefox.exe with a URL that runs an iMacro. Upon completion, the Firefox window is left open. I always have Firefox running, but the task brings up a new window; I'm not sure if this is a new instance of Firefox or just a second window. When the macro completes, I'd like the extra window closed. The extra window has a blank tab and a tab for the macro that ran. How can I set up the task to clean up after itself?

    Read the article

  • Amazon EC2: possible to use elastic load balancing across web servers in multiple regions based on location of client?

    - by Tony
    Related to another question I asked. This question seems similar, but I'm wondering if there are any updates. To support a single site that has users all over the world, I will create EC2 web servers in the US, Asia and Europe regions. The web server instances in the US and Asia regions will be backed by RDS replicas. Is it possible to load balance across these three regions? So when a customer from Spain goes to example.com, she should be routed to the EC2 instances in the Europe region, a customer in Miami should be sent to the instance in the Eastern US region, etc. Is it possible to do this with just AWS features? Are there docs on how to set this up?

    Read the article

  • Postfix performance

    - by Brian G
    Running Postfix on Ubuntu, sending a lot of mail (~1 million messages) per day. Loads are extremely high, but not much in terms of CPU and memory load. Anyone in a similar situation who knows how to remove the bottleneck? All mail on this server is outbound. I would have to assume the bottleneck is disk. Just an update, here is what iostat looks like:
    avg-cpu: %user %nice %system %iowait %steal %idle
             0.00  0.00  0.12    99.88   0.00   0.00
    Device: rrqm/s wrqm/s r/s   w/s   rsec/s wsec/s  avgrq-sz avgqu-sz await  svctm %util
    sda     0.00   12.38  0.00  2.48  0.00   118.81  48.00    0.00     0.00   0.00  0.00
    sdb     1.49   22.28  72.28 42.57 629.70 1041.58 14.55    135.56   834.31 8.71  100.00
    Are these numbers in line with the performance you would expect from a single disk? sdb is dedicated to Postfix. I think it is queue shuffling, from incoming-active-deferred. More details from the questions: Server: quad-core Xeon(R) CPU E5405 @ 2.00GHz with 4 GB RAM. Load average: 464.88, 489.11, 483.91 on 4 cores, but memory utilization and CPU are minimal. Postfix instances between 16 - 32

    Read the article

  • Windows Server Configuration with Exchange, SQL Express and IIS

    - by Reafidy
    In our small office we are currently running a standalone tower server with WS 2008 R2, SQL Express and IIS. This server is going to be decommissioned and scrapped as it's old and very noisy. We are going to purchase a new server with WS 2012 Standard and a heap of RAM. It will still be a standalone server, so it will be a domain controller and have SQL Express and IIS installed. We intend to install the Hyper-V role and host a second virtual server to distribute the load. We are a small company with only 15 staff members, so it's not a huge load on the server. Can a single server handle this type of installation? We don't want to purchase two servers. If so, how should it be configured with regard to which software packages should be virtualized (if any)? Redundancy is not a huge issue for us.

    Read the article

  • multi boot: xp + xp + xubuntu, how to?

    - by Jassano
    My laptop (with a single hard drive) currently has an XP + Xubuntu dual boot. I want to make that a triple boot: XP + XP + Xubuntu. Please don't ask why, take it as given. How can I accomplish this triple boot? I tried using GParted to add a partition (worked!), used dd to clone the XP install to the new partition (worked!), and edited GRUB (my bootloader) to list a third entry pointing to the correct device (worked!). But regardless of which of the two XP entries in GRUB I select, I still get booted into one and the same XP. The files for the other XP show up under D: so I know they're there alright. I have edited the boot.ini on the new partition, so everything looks to be in order. What do I need to do to change that and make both XP instances bootable in this scenario?

    Read the article

  • Meta-licensing of applications

    - by Gene
    I'm currently evaluating license management solutions for our customized and project-based applications, which are supported by a single server in the intranet of the customer. The applications use common functionality provided by the server (session handling, data synchronization, management capabilities, etc) and are installed on mobile devices. We allow our customers to run the applications on X devices and want to check on the server, whether the customer sticks to this limit (based on the sessions). We don't want licensing software to be installed on the devices itself (for example providing X serials to the customer) nor do we want to host an additional server for licensing in the intranet of the customer. If a client connects, our server should load the license for the application running on the client and verify, that there are sessions left. The licensing managers I looked at (12 products so far) focus on the application itself and don't allow me to implement such a floating behavior as described above. For example, this software could easily be used to create a "Standard Edition" or a "Professional Edition" of our server software, which is not our intention. In XHEO DeployLX there is a "Session Limit", which allows to limit the license to the currently established sessions in ASP.NET, which comes very close to my needs. I'm currently thinking of implementing a custom solution, which allows me to load and enforce custom-defined licenses per application on the server-side and a simple editor to define such licenses (which would contain a type and the limit itself), but I would appreciate an existing, easy to integrate commercial solution. I think it could be possible to use DeployLX for this task, but I would spend a lot of money for implementing most of the solution myself (except for the editor). Thanks in advance for any suggestions or hints. Gene

    Read the article

  • A Primer on Migrating Oracle Applications to a New Platform

    - by Nick Quarmby
    In Support we field a lot of questions about the migration of Oracle Applications to different platforms. This article describes the techniques available for migrating an Oracle Applications environment to a new platform and discusses some of the common questions that arise during migration. This subject has been frequently discussed in previous blog articles, but there still seems to be a gap regarding the type of questions we are frequently asked in Service Requests. Some of the questions we see are quite abstract: customers simply want to get a grip on how to approach a migration. Others want to know if a particular architecture is viable. Other customers ask about mixing different platforms within a single Oracle Applications environment. Just to clarify, throughout this article the term 'platform' refers specifically to operating systems and not to the underlying hardware. For a clear definition of 'platform' in the context of Oracle Applications Support, see Terri's very timely article: Oracle E-Business Suite Platform Smörgåsbord. The migration process is very similar for both 11i and R12, so this article only mentions specific differences where relevant.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistic information about each of the database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both. Let's have a look at this function and examine its results: SELECT db_name(database_id) DatabaseName, * FROM sys.dm_io_virtual_file_stats(NULL, NULL) The first column in the result set is DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have file_id, which represents the ID of the file, whether it be a data file or a transaction log file. The 'sample_ms' column represents the total time in milliseconds that the instance has been up and running. Next we have 'num_of_reads' and 'num_of_bytes_read', and later 'num_of_writes' and 'num_of_bytes_written'. These columns represent the number of reads or writes and the number of bytes read or written against a particular file; they are beneficial when determining how often a particular file is being accessed. The 'io_stall_read_ms' and 'io_stall_write_ms' columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file, respectively. The 'io_stall' column is the sum of both read and write IO stalls. The 'size_on_disk_bytes' column represents the size of the respective file on your disk subsystem. Lastly, the 'file_handle' column is simply the Windows file handle. This Dynamic Management Function is useful when you need to analyze your database files for the purposes of segregating high-IO databases. This DMF gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls. These could be your best candidates for moving onto separate IO channels. For more information about this DMF, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms190326.aspx Follow me on Twitter @PrimeTimeDBA
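    As a rough follow-on to the column descriptions above (this query is not from the original post, just an illustrative sketch), you can derive average read and write latency per file from the cumulative counters; the NULLIF calls simply avoid division by zero for files that have never been read or written:

    SELECT  DB_NAME(vfs.database_id) AS DatabaseName,
            vfs.file_id,
            vfs.num_of_reads,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
            vfs.num_of_writes,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms,
            vfs.size_on_disk_bytes / 1048576                     AS size_on_disk_mb
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY vfs.io_stall DESC;  -- files with the most total stall time first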

    Read the article

  • How to merge-copy multiple folders in Outlook?

    - by user553702
    In MS Outlook, I need to be able to incrementally copy items in multiple folders in the Exchange account to a local PST file with a mirrored folder structure. I need the items in each folder to be combined into the destination. For example, let's say on the server account I have a folder tree like this: Inbox SortedEmails1 SortedEmails2 SortedEmails3 I also have these same four folders in the local PST file, which I want to keep growing as I incrementally pull more messages from the Exchange server. Messages from "Inbox" should go to the local "Inbox", messages from "SortedEmails1" should go into "SortedEmails1" in the local PST, etc. I'd like to avoid manually iterating into every single folder and copying items. How can I do this?

    Read the article

  • Is 'Old-School' the Wrong Way to Describe Reliable Security?

    - by rickramsey
    The Hotel Toronto apparently knows how to secure its environment. "Built directly into the bedrock in 1913, the vault features an incredible 4-foot-thick steel door that weighs 40 tonnes, yet can nonetheless be moved with a single finger. During construction, the gargantuan door was hauled up Yonge Street from the harbour by a team of 18 horses." 1913. Those were the days. Sysadmins had to be strong as bulls and willing to shovel horse manure. At least nowadays you don't have to be that strong. And, if you happen to be trying to secure your Oracle Linux environment, you may be able to avoid the shoveling as well, provided you know the tricks of the trade contained in these two recently published articles. Tips for Hardening an Oracle Linux Server General strategies for hardening an Oracle Linux server. Oracle Linux comes "secure by default," but the actions you take when deploying the server can increase or decrease its security. How to minimize active services, lock down network services, and many other tips. By Ginny Henningsen, James Morris and Lenz Grimmer. Tips for Securing an Oracle Linux Environment System logging with logwatch and process accounting with psacct can help detect intrusion attempts and determine whether a system has been compromised. So can using the RPM package manager to verify the integrity of installed software. These and other tools are described in this second article, which takes a wider perspective and gives you tips for securing your entire Oracle Linux environment. Also by the crack team of Ginny Henningsen, James Morris and Lenz Grimmer. - Rick

    Read the article

  • Go Big or Go Home

    - by Justin Kestelyn
    The Oracle Develop conference (#oracledevelop10), being co-located for the first time ever with JavaOne in San Francisco, is guaranteed to be the ultimate rush for developers this year. Where else can you go to learn about, interact with, and meet fellow devotees of the entire Oracle Development stack (welcome, Oracle Solaris)? This will also be the first time that the community space traditionally located at Oracle OpenWorld - and hosted by Oracle Technology Network, as always - will be present at the "developer" conference during this busy week. So, Oracle OpenWorld's loss is Oracle Develop's gain. And what a community space it will be: nearly 4,000 square feet for meeting space, contests and give-aways, consumption of various beverages, special speakers (Oracle ACEs among them, no doubt), and video-casting. The entire Oracle Technology Network crew will be on hand to "facilitate" your experience, of course. Even better, you can rub shoulders and share war stories with attendees from that "other" conference, JavaOne. (You have access to both conferences as a single package, so you may be having a conversation with yourself.) We call the whole enchilada "The Zone". As time goes on, we'll bring you more news about the activities described above, as well as OTN Night (which proves to be more raucous than ever), technical sessions and keynotes not to be missed, the unconference/open sessions, things to do at night, and more. In the meantime, stay in touch with us via Twitter or Oracle Mix.

    Read the article

  • Partner Webcast - Extend Your Application Reach to Mobile Devices. The Fusion Way!

    - by Thanos
    Mobile access to enterprise applications is fast becoming a standard part of corporate life. Such applications increase organizational efficiency because mobile devices are more readily at hand than their desktop counterparts. However, the speed with which mobile platforms are evolving creates challenges as enterprises define their mobile strategies. Extending Oracle Enterprise and Fusion Applications to mobile devices comes naturally with Oracle Application Development Framework (ADF) Mobile, which provides all the necessary tools, services, and infrastructure to protect against technology shifts. Oracle ADF Mobile, part of Oracle ADF - the strategic, standards-based framework for Oracle Fusion Applications and Oracle Fusion Middleware - is an HTML5 and Java mobile development framework that enables developers to build and extend enterprise applications for iOS and Android from a single code base. Based on a hybrid mobile architecture, ADF Mobile supports access to native device services, enables offline applications and protects enterprise investments from future technology shifts. Join us to find out more about Oracle ADF Mobile and how to extend your applications to tablets & mobiles, building the next generation of mobile applications. Agenda: Enterprise Challenges & Mobile Computing Oracle ADF Mobile Features & Benefits Visual and Declarative Development Develop Once and Deploy Java Technology & Runtime Architecture Mobile Optimized User Experience Device Services Offline Support Authentication & Security Live Demonstration Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now! For any questions please contact us at [email protected] Visit our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle Technologies, upcoming partner webcasts and events. Existing content is available on YouTube - SlideShare - Oracle Mix.

    Read the article

  • How to configure Linux to open files by extension?

    - by Gregory MOUSSAT
    The various Linux desktops open files according to their MIME type. This is a very nice feature, but I also need to open them by extension (as with Windows). For instance, I want to open every xxxxx.vnc file with a specific program when I double-click on it. I use Xfce, but I don't think it differs from GNOME or KDE because all of them use the same configuration files (defaults.list and mimeapps.list). If possible, the settings should be user-specific, not system-wide. I've found only very sparse information about this, and all of it is system-wide, so it may be wiped out by updates.

    Read the article

  • Migrated SQL Server database suddenly in "Restoring" state

    - by Pete Montgomery
    Edit: This is still a live problem, less than an hour after trying RESTORE ... WITH RECOVERY. I backed up a SQL Server 2005 database and restored it to a new SQL 2008 instance. The restore was quick and successful. Everything was fine for an hour or so. Suddenly, the database is now stuck in the "(Restoring...)" state in Management Studio with a green arrow icon, and my application login is failing! Any advice? :-) Edit: This is a live application. If I delete the database and try again, the hour or so's worth of data will be lost.
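    For reference, the RESTORE ... WITH RECOVERY the poster mentions trying takes this general form (the database name below is a placeholder); it completes recovery of a database stuck in the restoring state without applying any further backups, and sys.databases confirms the resulting state:

    -- Hypothetical database name; brings the database online if no more log or differential backups are to be applied.
    RESTORE DATABASE [MyDatabase] WITH RECOVERY;

    -- Confirm the state afterwards.
    SELECT name, state_desc FROM sys.databases WHERE name = 'MyDatabase';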

    Read the article

  • Nginx & Passenger - failed (11: Resource temporarily unavailable) while connecting to upstream

    - by Toby Hede
    I have an Nginx and Passenger setup that is proving problematic. At relatively low loads the server seems to get backed up and starts churning results like this into the error.log: connect() to unix:/passenger_helper_server failed (11: Resource temporarily unavailable) while connecting to upstream My Passenger setup is: passenger_min_instances 2; passenger_pool_idle_time 1200; passenger_max_pool_size 20; I have done some digging, and it looks like the CPU gets pegged. Memory usage seems fine; passenger_memory_stats shows at most about 700MB being used, but CPU approaches 100%. Is this enough to cause this type of error? Should I bring the pool size down? Are there other configuration settings I should be looking at? Any help appreciated. Other pertinent information: Amazon EC2 Small Instance, Ubuntu 10.10, Nginx (latest stable), Passenger (latest stable), Rails 3.0.4

    Read the article

  • Revamped Google Webmaster Tools

    I was pleasantly surprised to realize today that Google's Webmaster Tools have had a minor overhaul and now provide more details than before. Most obvious are the changes on the dashboard, where the Top Search Queries now provide information about impressions and clickthroughs instead of the rankings shown before. Only the links for the search expressions are missing. It seems that the Top Search Queries were the focus of this update. The section is now spiced with detailed graphs about what happened on your site during selectable periods. Well, it seems that the Webmaster Tools mimic a stripped-down version of Google Analytics... I was very pleased by the details that are offered when you click on a single query term. It is really nice to see the search rankings and your responsible URLs at the same time. Before, you had to put two browser instances side by side to achieve this kind of overview. Personally, I like the approach of visualizing statistics the way Google and other providers do. It gives you a quick and informative overview, and enables you to dig further into details about peaks and lows in your visits, page impressions or clickthroughs.

    Read the article
