Search Results

Search found 972 results on 39 pages for 'grant'.

Page 6 of 39

  • Why do I need SELECT privileges on *.* to use a view?

    - by profy
    I have a problem using views with MySQL server 5.0 (5.0.92). I cannot use a view with a user granted like this: GRANT USAGE ON *.* TO 'testuser'@'' IDENTIFIED BY PASSWORD '**********'; GRANT ALL PRIVILEGES ON `testuser`.* TO 'testuser'@''; I can create the view, but when I try to select from it, I get this message: ERROR 1356 (HY000): View 'testuser.v' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them. I need to "GRANT SELECT ON *.* TO 'testuser'@''" to make selecting from the view work. Why? Do you know a solution that lets me use views without the SELECT privilege on *.*? Thanks a lot for your answers.
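    One thing worth checking, though nothing in the post above confirms it, is the view's DEFINER. MySQL runs a view with the rights of the account recorded in its DEFINER clause unless the view is declared SQL SECURITY INVOKER, and error 1356 is also raised when that recorded account cannot read the underlying tables. A minimal sketch, using a hypothetical table name:

        -- Hypothetical table; host values are placeholders.
        CREATE TABLE testuser.t (id INT);

        -- Show which account the existing view will run as:
        SHOW CREATE VIEW testuser.v;

        -- Re-create the view so it runs with the rights of whoever selects
        -- from it, rather than a definer that may lack privileges:
        CREATE OR REPLACE SQL SECURITY INVOKER VIEW testuser.v AS
            SELECT id FROM testuser.t;

    If the view genuinely needs to read objects outside the testuser schema, the definer (or invoker) still needs SELECT on those specific schemas, which is narrower than SELECT ON *.*.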

    Read the article

  • Remote connect to mysql server?

    - by LF4
    I've been trying to figure out why I keep getting this error when I try to connect to the MySQL server with the following command:

        $~ mysql -u username -h SQLserver -p
        Enter password:
        ERROR 1045 (28000): Access denied for user 'username'@'myIP' (using password: YES)

    I've done the following: the port is open in the firewall (otherwise I wouldn't get this error, it would just time out); the MySQL server is not running with skip-networking or bind-address; username has its host set to '%', and I can connect locally, so the password is correct.

        GRANT USAGE ON *.* TO username@% IDENTIFIED BY 'password';
        FLUSH PRIVILEGES;

    I wanted to know if anyone has ideas or has run into this issue before and solved it.

        mysql> select user, host from mysql.user where user='username';
        +----------+------+
        | user     | host |
        +----------+------+
        | username | %    |
        +----------+------+
        1 row in set (0.00 sec)

        mysql> show grants for 'username';
        +----------------------------------------------------------------------------------------------------------------+
        | Grants for username@%                                                                                          |
        +----------------------------------------------------------------------------------------------------------------+
        | GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY PASSWORD '*F42AD03PASSWORDHASHADF4021C86B'        |
        | GRANT ALL PRIVILEGES ON `DB2`.* TO 'username'@'%'                                                             |
        +----------------------------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)
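    A hedged suggestion rather than a confirmed fix: with only the '%' row present, it is worth confirming which account the server actually authenticates you as when you connect locally, and then explicitly re-setting the wildcard account's password (quoting the host). The values below are placeholders from the post:

        -- From a session that does work (e.g. locally), confirm which account
        -- the server actually authenticated you as:
        SELECT USER(), CURRENT_USER();

        -- Explicitly reset the password for the wildcard account:
        SET PASSWORD FOR 'username'@'%' = PASSWORD('password');
        FLUSH PRIVILEGES;

        -- If the server has old_passwords enabled, the stored hash format can
        -- also cause remote logins to fail; the variable is worth checking:
        SHOW VARIABLES LIKE 'old_passwords';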

    Read the article

  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish the default policy. Worth noting at the start: I've not changed the stock Tomcat install at all -- no new jars in its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action. The web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property), while turning off the security manager entirely results in no errors whatsoever and proper app functionality. What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs):

        grant codeBase "file:${catalina.base}/webapps/ROOT.war" { permission java.security.AllPermission; };
        grant codeBase "file:${catalina.base}/webapps/ROOT/-" { permission java.security.AllPermission; };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" { permission java.security.AllPermission; };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" { permission java.security.AllPermission; };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" { permission java.security.AllPermission; };

    Note the thrashing around attempting to find the right codeBase declaration; I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace:

        java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read)
        java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
        java.security.AccessController.checkPermission(AccessController.java:546)
        java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
        java.lang.SecurityManager.checkRead(SecurityManager.java:871)
        java.io.File.exists(File.java:731)
        org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785)
        org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206)
        org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299)
        org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937)
        org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973)
        org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108)
        java.lang.ClassLoader.getResource(ClassLoader.java:973)

    I'm pretty convinced that the actual file that's triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that (1) the file doesn't exist in this context, and (2) the fact that it doesn't exist ends up throwing a security exception rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission).
Another workaround (besides just disabling the security manager in tomcat) is to add an open-ended permission to my policy file: grant { permission java.security.AllPermission; }; I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation: WHERE MAX(Date1, Date2) < @SomeDate. The problem is, the MAX() function in SQL Server does not perform this functionality. So, I set out to put these functions together. They are titled EarlierOf() and LaterOf().

        /**********************************************************
                              EarlierOf.sql
        **********************************************************/
        /**********************************************************
        Return the earlier of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values. Even though
        comparisons with NULL are actually indeterminate, we know
        conceptually that NULL is not earlier or later than any other
        date provided.

        SYNTAX:
        SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
        SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.EarlierOf('11/15/2000',NULL)
        SELECT dbo.EarlierOf(NULL,'1/15/2004')
        SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'EarlierOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION EarlierOf
        END
        GO

        CREATE FUNCTION EarlierOf
        (
              @Date1      DATETIME,
              @Date2      DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END

              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 < @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO

    The inverse of this function is only slightly different.

        /**********************************************************
                              LaterOf.sql
        **********************************************************/
        /**********************************************************
        Return the later of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values. Even though
        comparisons with NULL are actually indeterminate, we know
        conceptually that NULL is not earlier or later than any other
        date provided.

        SYNTAX:
        SELECT dbo.LaterOf('1/1/2000','12/1/2009')
        SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.LaterOf('11/15/2000',NULL)
        SELECT dbo.LaterOf(NULL,'1/15/2004')
        SELECT dbo.LaterOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'LaterOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION LaterOf
        END
        GO

        CREATE FUNCTION LaterOf
        (
              @Date1      DATETIME,
              @Date2      DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END

              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 > @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON LaterOf TO UserRole1
        --GRANT SELECT ON LaterOf TO UserRole2
        --GO

    The interesting thing about these functions is their simplicity and the built-in NULL handling. It's interesting, because it seems like something should already exist in SQL Server that does this. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
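    As a quick usage sketch (the table and column names below are made up, not from the original), the functions slot straight into a WHERE clause, which is exactly the comparison the cursor needed:

        DECLARE @SomeDate DATETIME
        SET @SomeDate = '20100101'

        -- Rows where the later of the two dates still falls before the cutoff
        -- (dbo.Orders, OrderDate and ShipDate are hypothetical):
        SELECT OrderID
        FROM dbo.Orders
        WHERE dbo.LaterOf(OrderDate, ShipDate) < @SomeDate

    Because both parameters are typed as DATETIME, string literals passed in are implicitly converted, which is why the mixed formats in the SYNTAX examples all work; bear in mind that a scalar UDF in a WHERE clause runs once per row, which is the price of the cleaner syntax.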

    Read the article

  • Should all presentational images be defined in CSS?

    - by Grant Crofton
    I've been learning (X)HTML & CSS recently, and one of the main principles is that HTML is for structure and CSS for presentation. With that in mind, it seems to me that a fair number of images on most sites are just for presentation and as such should be in the CSS (with a div or span to hold them in the HTML) - for example logos, header images, backgrounds. However, while the examples in my book put some images in CSS, they are still often in the HTML. (I'm just talking about 'presentational' images, not 'structural' ones which are a key part of the content, for example photos in a photo site). Should all such images be in CSS? Or are there technical or logical reasons to keep them in the HTML? Thanks, Grant

    Read the article

  • adding a mail contact into AD

    - by Grant Collins
    Hi, I am looking for a bit of guidance on how to create mail contacts in AD. This is a follow-on question from SO Q#1861336. What I am trying to do is add a load of contact objects into an OU in Active Directory. I've been using the examples on CodeProject, however they only show how to make a new user, etc. How do I create a contact using C#? Is it similar to creating a new user but with different LDAP type attributes? My plan is to then run the Enable-MailContact PowerShell cmdlet to enable Exchange 2010 to see the contact in the GAL. As you can see from my questions, I don't usually deal with C# or Active Directory, so any help/pointers would be really useful before I start playing with this loaded gun. Thanks, Grant

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested, backups, companies have gone completely out of business or suffered traumatic financial loss. That’s just a simple fact (outlined with a few examples here). So you want to get backups right. That’s a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you’ll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let’s talk about them. Guidance If you’re a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you’re not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type you’ll see that only those databases that are in Full Recovery will be listed: This makes it very easy to be sure you have a log backup set up for all the databases you should and none of the databases where you won’t be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you’ll notice little green question marks. You can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here’s an example: Backup Copies As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn’t cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then backup that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine. Offsite Storage Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site, protection of your backups. It’s a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard or even the command line calls to SQL Backup, so it’s part a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you’ll need to do a little setup, but it’s built right into the product to get this done. You’ll be directed to the web site for our hosted storage where you can set up an account. 
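    As a side note to the guidance above about transaction log backups: the wizard filters on recovery model for you, but you can sanity-check the same thing yourself with the standard system views. This is just a sketch, not something the product requires:

        -- Databases in FULL recovery, and when their log was last backed up
        SELECT d.name,
               d.recovery_model_desc,
               MAX(b.backup_finish_date) AS last_log_backup
        FROM sys.databases AS d
             LEFT JOIN msdb.dbo.backupset AS b
                    ON b.database_name = d.name
                   AND b.type = 'L'          -- 'L' = transaction log backup
        WHERE d.recovery_model_desc = 'FULL'
        GROUP BY d.name, d.recovery_model_desc
        ORDER BY last_log_backup;

    Any FULL-recovery database with no recent log backup is a candidate for the log backup schedule described above.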
Compression If you have SQL Server 2008 Enterprise, or you’re on SQL Server 2008R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It’s built right in and works well. But, if you need even more compression then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53mb. Our file was 33mb. That’s a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system: So for this test, if you wanted maximum compression with minimum CPU use you’d probably want to go with Level 2 which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%. Restore Testing Backups are vital. But, a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you’ll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You’ll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But, this doesn’t do a complete test of the backup. The only complete test is a restore. So, what you really need is a process that tests your backups. This is something you’ll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you’re ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you’re going to restore to a specified copy or to the original database: If you’re doing your tests on a new server (probably the best choice) you can just overwrite the original database if it’s there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks. So we have DBCC built right into the process. You can even decide how you want DBCC run, which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system so that you only run the physical checks on your production box, but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well: Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don’t just have backups, but that you have tested backups. 
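    For comparison, the native backup used in the size test above boils down to something like the following (the path is a placeholder). CHECKSUM and VERIFYONLY give you the header and page checks mentioned here, but, as the article says, only an actual restore proves the backup:

        BACKUP DATABASE AdventureWorks2012
        TO DISK = N'C:\Backups\AdventureWorks2012.bak'
        WITH COMPRESSION, CHECKSUM, INIT;

        RESTORE VERIFYONLY
        FROM DISK = N'C:\Backups\AdventureWorks2012.bak'
        WITH CHECKSUM;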
Single Point of Management If you have more than one server to maintain, getting backups setup could be a tedious process. But, with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers backups from a single location. You’ll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface without having to connect to different servers. Log Shipping Wizard If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process including setting up alerts so you’ll know should your log shipping fail. Summary You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you’re going to want to place your database into source control. Using Red Gate SQL Source Control this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface.  This means you can work with your databases in any way that you’re used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that’s fine. After installing SQL Source Control, this is what you’ll see when you open SSMS:   SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don’t have a database selected. You can even see that in the SQL Source Control window where it shows, in red, “No database selected – select a database in Object Explorer.” If I expand my Databases list in the Object Explorer, you’ll be able to immediately see which databases have been integrated with source control and which have not. There are visible differences between the databases as you can see here:   To add a database to source control, I first have to select it. For this example, I’m going to add the AdventureWorks2012 database to an instance of the SVN source control software (I’m using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes:   I’m going to need to click on the “Link database to source control” text which will open up a window for connecting this database to the source control system of my choice.  You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I’ll be storing my database code. I set these up in advance. You’ll need two. One for the main set of scripts and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I’ll talk more about them another day. Finally, I have to determine if this is an isolated environment that I’m going to be the only one use, a dedicated database. Or, if I’m sharing the database in a shared environment with other developers, a shared database.  The main difference is, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate it into my database. While, under a shared database, all changes for all developers are made at the same time, which means you could commit other peoples work without proper testing. It all depends on the type of environment you work within. But, when it’s all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You’ll get a report showing exactly the list of differences and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You’ll be able to modify them later in the same way. Here’s the list of differences for my new database:   You can select/deselect all the objects or each object individually. You also get a report showing the differences between what’s in the database and what’s in source control. 
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • Changing the Module title in DotNetNuke on the Fly

    - by grant.barrington
    This is an extremely short post, but hopefully it will help others in the same situation. (Also, if I need to find it again I know where to look.) We have recently completed a project using DotNetNuke as our base. The project was for a B2B portal using Microsoft Dynamics NAV as our back-end. We had a requirement where we wanted to change the title of a Module on the fly. This ended up being pretty easy from our module. In our Page_Load event all we had to do was the following:

        Control ctl = DotNetNuke.Common.Globals.FindControlRecursiveDown(this.ContainerControl, "lblTitle");
        if ((ctl != null))
        {
            ((Label)ctl).Text += "- Text Added";
        }

    That's it. All we do is go up one control (this.ContainerControl) and look for the Label called "lblTitle". We then append text to the existing Module title.

    Read the article

  • Databases in Source Control

    - by Grant Fritchey
    I’ve been working as a database professional for quite a long time. But originally, I was a developer. And I loved being a developer. There was this constant feedback loop of a job well done, your code compiled and it ran. Every time this happened successfully, you’d check it into source control. These days you have to add another step; the code passed all the tests, unit, line, regression, qa, whatever, then into source control it goes. As a matter of fact, when I first made the jump from developer to DBA/database developer/database professional, source control was the one thing I couldn’t believe was missing from the DBA toolbox. Come to find out, source control was only the beginning of what was missing from your standard DBAs set of skills. Don’t get me wrong. I’m not disrespecting the DBA. They’re focused where they should be, on your production data. But there has to be a method for developing applications that include databases and the database side of that development and deployment process has long been lacking. This lack of development and deployment methodologies is a part of what has given rise to some of the wackier implementations of Object Relational Mapping tools, the NoSQL movement, and some of the other foul cursing that is directed towards databases, DBAs, and database development by application developers. Some of that is well earned. A lot isn’t. But it is a fact that database professionals, in general, do not have as sophisticated a model for managing development and deployment as application developers do. We could charge out and start trying to come up with our own standards and methods. I’m sure people have done exactly that. However, I’m lazy, and not terribly bright. Rather than try to invent a whole new process, I’m going to look to my developer roots and choose instead to emulate the developers. They’re sitting over there across the hall from me working with SCRUM/Agile/Waterfall/Object Driven/Feature Driven/Test Driven development processes that they’ve been polishing for years. What if I just started working on database development the same way they work on code development? Win! Ah, but now I have to have a mechanism for treating my database like application code. First, I need a method for getting it into source control. That’s where Red Gate’s SQL Source Control comes into the picture. SQL Source Control works within SQL Server Management Studio to connect your database objects up to the source control system of your choice. Right out of the box SQL Source Control can link to TFS, SVN or Vault. With a little work you can connect it to Git or just about any other source control system. With the ability to get my database into source control, a lot of possibilities for more direct integration with the application development teams open up.

    Read the article

  • Getting NLog Running in Partial Trust

    - by grant.barrington
    To get things working you will need to: strong-name sign the assembly, and allow partially trusted callers. In the AssemblyInfo.cs file you will need to add the assembly attribute for "AllowPartiallyTrustedCallers". You should now be able to get NLog working as part of a partial trust installation, except that the File target won't work. Other targets will still work (database, for example).

    Changing BaseFileAppender.cs to get file logging to work: in the directory \Internal\FileAppenders there is a file called "BaseFileAppender.cs". Make a change to the function "TryCreateFileStream()", which is where the error occurs. Change the function to be:

        private FileStream TryCreateFileStream(bool allowConcurrentWrite)
        {
            FileShare fileShare = FileShare.Read;
            if (allowConcurrentWrite)
                fileShare = FileShare.ReadWrite;
        #if DOTNET_2_0
            if (_createParameters.EnableFileDelete && PlatformDetector.GetCurrentRuntimeOS() != RuntimeOS.Windows)
            {
                fileShare |= FileShare.Delete;
            }
        #endif
        #if !NETCF
            try
            {
                if (PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.WindowsNT) ||
                    PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.Windows))
                {
                    return WindowsCreateFile(FileName, allowConcurrentWrite);
                }
            }
            catch (System.Security.SecurityException secExc)
            {
                InternalLogger.Error("Security Exception Caught in WindowsCreateFile. {0}", secExc.Message);
            }
        #endif
            return new FileStream(FileName, FileMode.Append, FileAccess.Write, fileShare, _createParameters.BufferSize);
        }

    Basically, we wrap the call in a try..catch. If we catch a SecurityException when trying to create the FileStream using WindowsCreateFile(), we just swallow the exception (logging it internally) and use the native System.IO.FileStream instead.

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at a current load on the system and how to drill down to look at historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing I can show you before we need to start fixing things and showing how that affects the data collected, historical moments in time. For example, back in Post #3 I was looking at some spikes in some of the monitored resources that were taking place a couple of weeks back in time. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, Global Overview, and click on the icon: From this you can select the date and time you're interested in. For example, I saw some serious CPU queues last week: This then rolls back the time for all the information that's available to the Global Overview and the drill down to the server and the SQL Server instance there. This then allows me to look at the Top Queries running at this point, sort them by CPU and identify what was potentially the query that was causing the problem right when I saw the CPU queuing This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool to investigate your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time. You need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU Nope, It's SQL Server. I compared this to the rg-sql02 server: You can see that there is a more, consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes. I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: Of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of, activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, the number of requests drops as the CPU spikes? See, it's useful. Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server. One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: Those spikes back in January suggest that the servers weren't really being used much. The PLE on the rg-sql01 seems to be somewhat consistent growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high level gathering data view, I'm going to drill down on rg-sql02 and see what it's done for the last week: And now we begin to see where we might have an issue. Memory on this system is getting flushed every 1/2 hour or so. I'm going to check another metric, scans: Whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02. Here is the average disk queue length on the server: and the transfers Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 and check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results: OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads: Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor. 
    What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down that it is), and then start seeing what's up with these queries. Part 1 of the Series | Part 2 of the Series
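    Once connected, those first two checks could look something like this. Treat it as a sketch: only the database name comes from the text above, the rest is standard catalog and DMV plumbing rather than anything Red Gate specific:

        USE Warehouse;
        GO
        -- Which drive are the data files actually on?
        SELECT name, physical_name FROM sys.database_files;

        -- Which indexes (or heaps) are being scanned rather than sought?
        SELECT OBJECT_NAME(ius.object_id) AS table_name,
               i.name AS index_name,
               ius.user_scans,
               ius.user_seeks
        FROM sys.dm_db_index_usage_stats AS ius
             JOIN sys.indexes AS i
                  ON i.object_id = ius.object_id
                 AND i.index_id = ius.index_id
        WHERE ius.database_id = DB_ID()
        ORDER BY ius.user_scans DESC;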

    Read the article

  • JDeveloper and ADF at UKOUG

    - by Grant Ronald
    This year, Oracle ADF and JDeveloper has a big showing at the UKOUG (about 22 hours worth!!)- Europe's largest Oracle User Group.  There are three days packed with awesome ADF content delivered by some of the leading lights in ADF Developement including Duncan Mills, Frank Nimphius, Shay Shmeltzer, Susan Duncan, Lucas Jellema, Steven Davelaar, Sten Vesterli (and I'll be there as well!). Please make sure you refer to the official agenda for timings but an outline is here (if you think there are any sessions I have missed let me know and I will add them) Monday 10:00 - 10:45 - Deepdive into logical and physical data modeling with JDeveloper 10:00 - 12:15 - Debugging ADF Applications 12:15 - 13:15 - Learn ADF Task Flows in 60 Minutes 14:30 - 15:15 - ADF's Hidden Gem - the Groovy scripting language in Oracle ADF 15:25 - 16:10 - ADF Patterns for Forms Conversions 16:35 - 17:35 - Dummies Guide to Oracle ADF 16:35 - 17:35 - ADF Security Overview - Strategies and Best Practices 17:45 - 18:30 - A Methodology for Enterprise Applications with Oracle ADF Tuesday 09:00 - 10:00 - Real World Performance Tuning for Oracle ADF 11:15 - 12:15 - Keynote: Modern Development, Mobility and Rich Internet Applications 11:15 - 12:15 - Migration to Fusion Middleware 11g: Real world cases of Forms, ADF and Identity Management upgrades 14:40 - 15:20 - What's new in JDeveloper 11gR2 14:40 - 15:20 - Development Tools Roundtable 15:35 - 16:20 - ALM in Jdeveloper is exciting! 16:40 - 17:40 - Moving Oracle Forms to Oracle ADF: Case Studies Wednesday 09:00 - 10:00 - Building a Multi-Tasking ADF Application with Dynamic Regions and Dynamic Tabs 10:10 - 10:55 - Building Highly Reusable ADF Taskflows 12:30 - 13:30 - Design Patterns, Customization and Extensibility of Fusion Applications 14:25 - 15:10 - Continuous Integration with Hudson: What a year! 14:00 - 17:00 - Wednesday Wizardry with Fusion Middleware - Live application development demonstration with ADF, SOA Suite 15:20 - 16:05 - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way  16:15 - 17:00 - Leveraging ADF for Building Complex Custom Applications

    Read the article

  • Oracle Forms 11g Customer Upgrade Reference

    - by Grant Ronald
    We have just published a reference to an Oracle customer, Callista, talking about their Forms upgrade experiences and future development plans. I'm actually seeing a huge number of Forms customers upgrading to 11g but it can take some time and effort for customers to formally agree to be a reference story, so I'm grateful to Callista for taking the time to become an 11g upgrade reference.  We have a number of other customers who are writing up their upgrade experiences and we hope to have these on OTN in the coming months. You can access this from the Forms home page on OTN.

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen, parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that, I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database: SELECT  deqs.creation_time,         deqs.execution_count,         deqs.max_logical_reads,         deqs.max_elapsed_time,         deqs.total_logical_reads,         deqs.total_elapsed_time,         deqp.query_plan,         SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,                   (deqs.statement_end_offset - deqs.statement_start_offset) / 2                   + 1) AS QueryStatement FROM    sys.dm_exec_query_stats AS deqs         CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest         CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp WHERE   dest.dbid = DB_ID('Warehouse') AND deqs.statement_end_offset > 0 AND deqs.statement_start_offset > 0 ORDER BY deqs.max_logical_reads DESC ; And looking at the most expensive operation, we have our first bad boy: Multiple table scans against very large sets of data and a sort operation. a sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question, why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8036318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows. well, I'm betting several more than one considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity. 
    This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it is. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.
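    Since the text above doesn't show the actual function, here's a purely illustrative sketch of the kind of rewrite that usually fixes this pattern: a multi-statement table valued function hides its row counts from the optimizer (hence the "estimated number of rows = 1"), while the inline form lets the optimizer expand the query and use real statistics. All object names below are made up:

        -- The problem shape: a multi-statement TVF filling a table variable
        CREATE FUNCTION dbo.GetCustomerOrders_MultiStatement (@CustomerID INT)
        RETURNS @Result TABLE (OrderID INT, OrderDate DATETIME)
        AS
        BEGIN
            INSERT INTO @Result (OrderID, OrderDate)
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
            RETURN;
        END
        GO

        -- The rewrite: an inline TVF, which behaves like a parameterized view
        CREATE FUNCTION dbo.GetCustomerOrders_Inline (@CustomerID INT)
        RETURNS TABLE
        AS
        RETURN
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
        GO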

    Read the article

  • Free Virtual Developer Day - Oracle Fusion Development

    - by Grant Ronald
    You know, there is no reason for you or your developers not to be top notch Fusion developers.  This is my third blog in a row telling you about some sort of free training.  In this case its a whole on line conference! In this on line conference you can learn about the various components that make up the Oracle Fusion Middleware development platform including Oracle ADF, Oracle WebCenter, Business Intelligence, BPM and more!  The online conference will include seminars, hands-on lab and live chats with our technical staff including me!!  And the best bit, it doesn't cost you a single penny/cent.  Its free and available right on your desktop. You have to register to attend so click here to confirm your place.

    Read the article

  • Alert for Forms customers running Oracle Forms 10g

    - by Grant Ronald
    Doesn't time fly! While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g. The premier support end date is December 31st 2011, and this is documented in Note 1290974.1, available from MOS. So how much of an impact is this going to be? Maybe not as much as you think. While Forms 11g is based on WebLogic Server (WLS), 10g was based on OC4J. That in itself doesn't impact Forms much. In most cases it will simply be a recompile of your Forms source files and a redeploy on WLS 11g. The Forms Builder is the same in 11g as in 10g, although it's currently not available as a separate download from the main middleware bundle. You can also look at a WLS Basic license option, which means you shouldn't have to shell out on upgrading to a WLS Suite license option. So what's the proof that this is a relatively straightforward upgrade? Well, we've had a big uptake of Forms 11g already (which has itself been out for over 2 years). Read about BT Expedite's upgrade, where "The upgrade of Forms from Oracle Forms 10g to Oracle Forms 11g was relatively simple and on the whole, was just a recompilation". Or AMEC, where "This has been one of the easiest Forms conversions we've ever done and it was a simple recompile in all cases". So if you are on 10g (or even earlier versions) I'd strongly consider starting your planning for an upgrade to 11g now. As always, if you have any questions about this you can post on the OTN forums.

    Read the article

  • Oracle Forms Modernization Seminar January 2011

    - by Grant Ronald
    Are you running Oracle Forms applications in your business?  Do you want to know about the future of Oracle Forms and the options you have?  How can you modernize your Oracle Forms investment? I'll be presenting at a web seminar on the 20th January and will be discussing these points.  Details of the seminar can be found here: Iseminar.pdf Registration is available here. 

    Read the article

  • Reviewing Oracle ADF Enterprise Application Development Made Simple Book

    - by Grant Ronald
    Although I was a technical reviewer of Oracle ADF Enterprise Application Development-Made Simple (by Sten Vesterli) it is nice to get the finished article in your hands as a real tangible book. Personally, on a sun lounger with a Dan Brown book I can read 300 pages a day, but technical books are a different beast and I find it hard to get through them with the same vigour.  However, I'm up to chapter 7 in Sten's book and so far it's holding my interest.  He writes in an almost conversational tone and I really like the comparisons to "real world" concepts - like page templates being like gingerbread cookie cutters.  Personally I like to be able to compare or size up a new concept against something I already know. I'll post a full review next week but the good news is 212 pages in and I'm still reading!

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But, you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning. First thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both: Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular, peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points there where the CPU drops radically. that might be worth investigating further because something changed the processing on this system a lot. But the first server. It's all over the place. There's no steady CPU behavior at all. It's spike high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peeks about a week ago. Looking at the Pages/sec, again, just a measure of load, I see that there are some peaks on the rg-sql02 server, but over all, it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, but just a load measurement, but from this, I don't think we're looking at major memory issues, but I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact there was a point earlier in the year that looks pretty severe. Plus the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server. Taking a look at the disk transfers/sec the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance. 
    If I saw spikes in memory or disk that corresponded to the drop in CPU, I could assume something was using those other resources and causing the drop; but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.
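    If you want to cross-check what the monitoring graphs show from inside SQL Server itself, a couple of the same load counters are exposed through a DMV. A quick sketch (counter names can vary slightly between editions and instances, so treat the literals as a starting point):

        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Batch Requests/sec', 'Page life expectancy')
        ORDER BY counter_name;

    Batch Requests/sec there is a cumulative count since startup rather than a rate, so you would sample it twice and take the difference to get the per-second figure the graphs show.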

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: For some more details, some of which may not be true given what I've figured out, see this post. When I first boot my computer, video performance (both native H.264 HTML5 in YouTube and Vimeo, and in Flash) in Chrome is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5 second lead time to leave full-screen after hitting Esc). Restarting Chrome doesn't fix this — I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle — it's only Chrome. I'm using a Lenovo X201, with an Intel GMA HD graphics chipset, and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't changed anything that I think would have caused this to happen. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas? EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown" — am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM — when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • New JDeveloper/ADF book hits the bookshelves

    - by Grant Ronald
    I've just received a nice new copy of Sten Vesterli's book Oracle ADF Enterprise Application Development - Made Simple. I was one of the technical reviewers of the book, but I'm looking forward to being able to read it end-to-end in good old-fashioned book format this coming week. The book bridges the gap between those existing books that describe Oracle ADF features, and real-world ADF development. So, source control, bug tracking, estimating, testing, security, packaging etc. are all covered. Of course, every project and situation is different, so the book could never supply a one-size-fits-all guide, but I think it's a good addition to your ADF bookshelf. I'll hopefully post a full review in the coming weeks. Oh, and congratulations Sten -- having gone through the pain of writing my own ADF book, I take my hat off to anyone who goes through the same journey!

    Read the article

  • I'm Seeing Red

    - by Grant Fritchey
    Hello World! My move into the world of Red Gate is more and more complete with my shiny, new, red, blog. The goal of this blog is not to compete with, or replace, my blog over at ScaryDBA. Instead, this blog is where I can share things I find about Red Gate products and services. I can talk about the things that we're doing at Red Gate. I can talk about the things I'm doing at Red Gate. In short, this is my Red Gate blog. I'm still the Scary DBA, but over here, I'm painted bright red (and no, I was promised that no pictures were taken of that process). So look for tips and suggestions about Red Gate products, methods to help you do your job better using one of our tools, and anything else I can think of or comment on that supports you and our excellent software.

    Read the article

  • Monitoring Your Servers

    - by Grant Fritchey
    If you are the DBA in a large scale enterprise, you’re probably already monitoring your servers for up-time and performance. But if you work for a medium-sized business, a small shop, or even a one-man operation, chances are pretty good that you’re not doing that sort of monitoring. You know that you’re supposed to be doing it, but other things, more important at-the-moment things, keep getting in the way. After all, which is more important, some monitoring or backup testing?  Backup testing, of course. Monitoring is frequently one of those things that you do when can get around to it.  Well, as you can see at the right, I have your round tuit ready to go. What if I told you that you could get monitoring on your servers for up-time, job completion, performance, all the standard stuff? And what if I told you that you wouldn’t need to install and configure another server in your environment to get it done? And what if I told you that you’d be able to set up and customize your alerts so you could know if your server was offline or a drive was full? Almost nothing for you to do, and you’ll have a full-blown monitoring process. Sounds to good to be true doesn’t it? Well, it’s coming. We’re creating an online, remote, monitoring system here at Red Gate. You’ll be able to use our SQL Monitor tool (which you can see here, monitoring SQL Server Central in real time) to keep track of your systems, but without having to set up a server and a database for storing the information collected. Instead, we’re taking advantage of services available through the internet to enable collection and storage of this information remotely, off your systems. All you have to do is install a piece of software that will communicate between our service and your servers and you’ll be off and running. It’s that easy. Before you get too excited, let me break the news that this is the near future I’m talking about. We’re setting up the program and there’s a sign-up you can use to get in on the initial tests.

    Read the article
