Search Results

Search found 42321 results on 1693 pages for 'sql reporting services 05'.

  • Has the recent version of Subversion dealt with "Access Denied" errors from Windows services that monitor the filesystem?

    - by Eric LaForce
    Does anyone know if this Subversion "bug" has been dealt with? https://svn.apache.org/repos/asf/subversion/tags/1.6.9/www/faq.html#windows-access-denied

    The FAQ says: "I'm getting occasional 'Access Denied' errors on Windows. They seem to happen at random. Why? These appear to be due to the various Windows services that monitor the filesystem for changes (anti-virus software, indexing services, the COM+ Event Notification Service). This is not really a bug in Subversion, which makes it difficult for us to fix. A summary of the current state of the investigation is available here. A workaround that should reduce the incidence rate for most people was implemented in revision 7598; if you have an earlier version, please update to the latest release."

    Currently I am experiencing this same behavior in version 1.5.6 when I try to do an SVN switch (I have suspected McAfee as the culprit for a while, and seeing this validates my suspicions). I read through the link given, but it seems pretty old, so I didn't know whether the FAQ was just outdated and the issue has actually been resolved. Thanks for any help.

    Configuration:
    SVN 1.5.6
    TortoiseSVN 1.5.9 Build 15518
    Windows XP SP3 32-bit

    Read the article

  • Binding a date string parameter in an MS Access PDO query

    - by harryg
    I've made a PDO database class which I use to run queries on an MS Access database. When querying using a date condition, as is common in SQL, dates are passed as a string. Access, however, usually expects the date to be surrounded in hashes, e.g.:

    SELECT transactions.amount FROM transactions WHERE transactions.date = #2013-05-25#;

    If I were to run this query using PDO I might do the following:

    //instantiate pdo connection etc... resulting in a $db object
    $stmt = $db->prepare('SELECT transactions.amount FROM transactions WHERE transactions.date = #:mydate#;'); //prepare the query
    $stmt->bindValue('mydate', '2013-05-25', PDO::PARAM_STR); //bind the date as a string
    $stmt->execute(); //run it
    $result = $stmt->fetch(); //get the results

    As far as I understand it, the statement that results from the above would look like this, since binding a string wraps it in quotes:

    SELECT transactions.amount FROM transactions WHERE transactions.date = #'2013-05-25'#;

    This causes an error and prevents the statement from running. What's the best way to bind a date string in PDO without causing this error? I'm currently resorting to sprintf-ing the string, which I'm sure is bad practice.

    Edit: if I pass the hash-surrounded date then I still get the error as below:

    Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[22018]: Invalid character value for cast specification: -3030 [Microsoft][ODBC Microsoft Access Driver] Data type mismatch in criteria expression. (SQLExecute[-3030] at ext\pdo_odbc\odbc_stmt.c:254)' in C:\xampp\htdocs\ips\php\classes.php:49 Stack trace: #0 C:\xampp\htdocs\ips\php\classes.php(49): PDOStatement->execute() #1 C:\xampp\htdocs\ips\php\classes.php(52): database->execute() #2 C:\xampp\htdocs\ips\try2.php(12): database->resultset() #3 {main} thrown in C:\xampp\htdocs\ips\php\classes.php on line 49
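
    A minimal sketch of the usual workaround (hedged: exact behavior depends on the Access ODBC driver): drop the hash delimiters entirely and bind the bare string, optionally forcing the conversion on the Access side with CDate(). The table and column names come from the question; the DSN and connection details are assumptions.

        <?php
        // Hypothetical DSN; swap in your real odbc: connection string.
        $db = new PDO('odbc:MyAccessDb');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // No hashes around the placeholder. The bound string is passed as a
        // plain parameter; CDate() asks the Jet engine to cast it to a date,
        // which avoids the "data type mismatch" the hash/quote mix causes.
        $stmt = $db->prepare(
            'SELECT transactions.amount FROM transactions WHERE transactions.date = CDate(?)'
        );
        $stmt->execute(array('2013-05-25'));
        $result = $stmt->fetch(PDO::FETCH_ASSOC);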

    Read the article

  • Mac CoreLocation Services does not ask for permissions

    - by Ryan Nichols
    I'm writing a Mac app that needs to use CoreLocation services. The code and location work fine, as long as I manually authorize the app in the Security preference pane. However, the framework is not automatically popping up a permission dialog. The documentation states:

    "Important: The user has the option of denying an application's access to the location service data. During its initial uses by an application, the Core Location framework prompts the user to confirm that using the location service is acceptable. If the user denies the request, the CLLocationManager object reports an appropriate error to its delegate during future requests."

    I do get an error to my delegate, and the value of +locationServicesEnabled is correct on CLLocationManager. The only part missing is the prompt to the user about permissions. This occurs on my development MBP and a friend's MBP; neither of us can figure out what's wrong. Has anyone run into this? Relevant code:

    _locationManager = [CLLocationManager new];
    [_locationManager setDelegate:self];
    [_locationManager setDesiredAccuracy:kCLLocationAccuracyKilometer];
    ...
    [_locationManager startUpdatingLocation];

    UPDATE: Answer. It seems there is a problem with sandboxing in which the CoreLocation framework is not allowed to talk to com.apple.CoreLocation.agent. I suspect this agent is responsible for prompting the user for permissions. Adding the Location Services entitlement (com.apple.security.personal-information.location) only gives your app the ability to use the CL framework; you also need access to the CoreLocation agent to ask the user for permissions. You can give your app that access by adding the entitlement 'com.apple.security.temporary-exception.mach-lookup.global-name' with a value of 'com.apple.CoreLocation.agent'. Users will then be prompted for access automatically, as you would expect. I've filed a bug with Apple on this already.
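
    A sketch of the corresponding .entitlements file fragment (hedged: whether the mach-lookup value is written as a single string or an array of strings can vary; both forms appear in practice):

        <key>com.apple.security.app-sandbox</key>
        <true/>
        <key>com.apple.security.personal-information.location</key>
        <true/>
        <key>com.apple.security.temporary-exception.mach-lookup.global-name</key>
        <array>
            <string>com.apple.CoreLocation.agent</string>
        </array>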

    Read the article

  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class:

    public class AOPLogger {
        public void logBefore() {
            System.out.println("Logging Before!");
        }
    }

    My Spring configuration (beans.xml):

    <aop:config>
        <aop:aspect id="aopLogger" ref="test.aop.AOPLogger">
            <aop:before method="logBefore" pointcut="execution(* test.rest.RestService(..))"/>
        </aop:aspect>
    </aop:config>
    <bean id="aopLogger" class="test.aop.AOPLogger"/>

    I always get an NPE in RestService when a call is made to a method getServletRequest(), which has:

    return messageContext.getHttpServletRequest();

    If I remove the AOP configuration or comment it out of my beans.xml, everything works fine. All of my actual REST services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based off the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!
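
    Two things worth checking in that config, offered as a hedged sketch rather than a confirmed fix for the NPE: aop:aspect's ref must name a Spring bean id (here aopLogger), not a class name, and the pointcut as written has no method pattern, so it matches nothing useful. A corrected version, with a trailing + so subclasses of RestService match too:

        <aop:config proxy-target-class="true">
            <aop:aspect id="loggingAspect" ref="aopLogger">
                <aop:before method="logBefore"
                            pointcut="execution(* test.rest.RestService+.*(..))"/>
            </aop:aspect>
        </aop:config>
        <bean id="aopLogger" class="test.aop.AOPLogger"/>

    proxy-target-class="true" forces CGLIB subclass proxies, which matters here because the services are concrete classes rather than interfaces.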

    Read the article

  • RIA Services for transmitting a non-DB object graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs: objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called "FolderInformation" structured with:

    - string Name
    - IEnumerable<FileInformation> Files

    I cannot get this to work because it seems that RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate that in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign-key relationships - like folder/file, for example.

    EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?
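
    There is, with some ceremony. A sketch of one common approach (hedged: it assumes WCF RIA Services' rule that every exposed type carries a [Key] member and that associations are declared explicitly; all class and member names below are illustrative, using the path as a stand-in key):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.IO;
        using System.Linq;
        using System.ServiceModel.DomainServices.Hosting;
        using System.ServiceModel.DomainServices.Server;

        public class FolderInformation
        {
            [Key]
            public string Path { get; set; }          // the path stands in for a key
            public string Name { get; set; }

            [Include]
            [Association("Folder_Files", "Path", "FolderPath")]
            public IEnumerable<FileInformation> Files { get; set; }
        }

        public class FileInformation
        {
            [Key]
            public string FullPath { get; set; }
            public string FolderPath { get; set; }    // back-reference for the association
            public string Name { get; set; }
        }

        [EnableClientAccess]
        public class FileSystemDomainService : DomainService
        {
            [Query(IsComposable = false)]
            public FolderInformation GetFolderInformation(string pathToFolder)
            {
                var dir = new DirectoryInfo(pathToFolder);
                return new FolderInformation
                {
                    Path = dir.FullName,
                    Name = dir.Name,
                    Files = dir.GetFiles().Select(f => new FileInformation
                    {
                        FullPath = f.FullName,
                        FolderPath = dir.FullName,
                        Name = f.Name
                    }).ToList()
                };
            }
        }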

    Read the article

  • ADO.NET Fill method not throwing an error when running a stored procedure that does not exist

    - by Mike
    I am using a combination of the Enterprise Library and the original Fill method of ADO.NET. This is because I need to open and close the command connection myself, as I am capturing the InfoMessage event. Here is my code so far:

    // Set up command
    SqlDatabase db = new SqlDatabase(ConfigurationManager.ConnectionStrings[ConnectionName].ConnectionString);
    SqlCommand command = db.GetStoredProcCommand(StoredProcName) as SqlCommand;
    command.Connection = db.CreateConnection() as SqlConnection;

    // Set up events for logging
    command.StatementCompleted += new StatementCompletedEventHandler(command_StatementCompleted);
    command.Connection.FireInfoMessageEventOnUserErrors = true;
    command.Connection.InfoMessage += new SqlInfoMessageEventHandler(Connection_InfoMessage);

    // Add parameters
    foreach (Parameter parameter in Parameters)
    {
        db.AddInParameter(command, parameter.Name, (System.Data.DbType)Enum.Parse(typeof(System.Data.DbType), parameter.Type), parameter.Value);
    }

    // Use the old-style Fill to keep the connection open throughout the population
    // and manage the StatementCompleted and InfoMessage events
    SqlDataAdapter da = new SqlDataAdapter(command);
    DataSet ds = new DataSet();

    // Open connection
    command.Connection.Open();

    // Populate
    da.Fill(ds);

    // Dispose of the adapter
    if (da != null)
    {
        da.Dispose();
    }

    // If you do not explicitly close the connection here, it will leak!
    if (command.Connection.State == ConnectionState.Open)
    {
        command.Connection.Close();
    }
    ...

    Now if I pass in StoredProcName = "ThisProcDoesNotExists" and run this piece of code, neither CreateCommand nor da.Fill throws an error. Why is this? The only way I can tell it did not run is that it returns a DataSet with 0 tables in it. When investigating the error, it is not apparent that the procedure does not exist.

    EDIT: Upon further investigation, command.Connection.FireInfoMessageEventOnUserErrors = true; is causing the error to be suppressed into the InfoMessage event. From BOL: "When you set FireInfoMessageEventOnUserErrors to true, errors that were previously treated as exceptions are now handled as InfoMessage events. All events fire immediately and are handled by the event handler. If FireInfoMessageEventOnUserErrors is set to false, then InfoMessage events are handled at the end of the procedure." What I want is each print statement from SQL to create a new log record; setting this property to false combines them into one big string. So if I leave the property set to true, the question now is whether I can discern a print message from an error.

    ANOTHER EDIT: So now I have the code so that the flag is set to true, and I am checking the error number in the method:

    void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
    {
        // These are not really errors unless Number > 0;
        // if Number == 0, it is a print message
        foreach (SqlError sql in e.Errors)
        {
            if (sql.Number == 0)
            {
                Logger.WriteInfo("Sql Message", sql.Message);
            }
            else
            {
                // Whatever this was, it was an error
                throw new DataException(String.Format("Message={0},Line={1},Number={2},State{3}", sql.Message, sql.LineNumber, sql.Number, sql.State));
            }
        }
    }

    The issue now is that when I throw the error, it does not bubble up to the statement that made the call, or even to the error handler above that; it just bombs out on that line. The populate looks like:

    // Populate
    try
    {
        da.Fill(ds);
    }
    catch (Exception e)
    {
        throw new Exception(e.Message, e);
    }

    Now even though I see the calling codes and methods still in the call stack, this exception does not seem to bubble up?
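
    A sketch of the usual way around the swallowed exception (an assumption worth testing: exceptions thrown inside an InfoMessage handler while FireInfoMessageEventOnUserErrors is true are caught by the connection internals rather than propagated): record the real errors in the handler and rethrow once Fill returns. Logger is the question's own helper.

        private readonly List<SqlError> _deferredErrors = new List<SqlError>();

        void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            foreach (SqlError err in e.Errors)
            {
                if (err.Number == 0)
                    Logger.WriteInfo("Sql Message", err.Message);  // plain PRINT output
                else
                    _deferredErrors.Add(err);                      // remember it; don't throw here
            }
        }

        // ... after the population:
        da.Fill(ds);
        if (_deferredErrors.Count > 0)
        {
            SqlError first = _deferredErrors[0];
            throw new DataException(String.Format(
                "Message={0}, Line={1}, Number={2}, State={3}",
                first.Message, first.LineNumber, first.Number, first.State));
        }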

    Read the article

  • Silverlight RIA Services - Showing columns from multiple tables

    - by Villager
    Hello, I have been evaluating Silverlight and .NET RIA Services for my company. I am trying to decide if it is right for us. For the most part, I like it. But I see one item that I am surprised I cannot do easily; because of this, I'm guessing I'm doing something wrong. To demonstrate, I have two database tables:

    Order: ID, CustomerID, OrderDate, OrderNumber
    Customer: ID, FirstName, LastName, Address

    When I create my domain service class, I select both of these tables. In the Silverlight application, I drag and drop the Order entity from the Data Sources page onto my Silverlight page. When I do this, a DataGrid is added with all of the properties in the Order entity. In reality, though, I would like the DataGrid to show:

    Order.OrderNumber
    Order.OrderDate
    Customer.FirstName
    Customer.LastName

    Because this information is spread across multiple tables, I'm not sure how to use RIA Services to show them in my Silverlight application. What is the recommended way to do this? Should I add a view in my database? Can I do it without touching the database? Thank you.
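
    One common answer, no database changes needed (a sketch, assuming an Entity Framework-backed domain service and the usual RIA metadata "buddy class" pattern; names are illustrative): mark the Customer navigation property with [Include] so it is serialized with each Order, eager-load it in the query, and bind grid columns through the navigation path.

        // Metadata buddy class: ask RIA Services to send the related Customer
        [MetadataType(typeof(Order.OrderMetadata))]
        public partial class Order
        {
            internal sealed class OrderMetadata
            {
                [Include]
                public Customer Customer { get; set; }
            }
        }

        // Domain service query: eager-load the association so [Include] has data
        public IQueryable<Order> GetOrdersWithCustomers()
        {
            return this.ObjectContext.Orders.Include("Customer");
        }

    On the Silverlight side, replace the auto-generated columns with explicit ones bound to OrderNumber, OrderDate, Customer.FirstName and Customer.LastName.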

    Read the article

  • Doing a POST to a service operation in ADO.NET Data Services

    - by DataServices123
    Is it possible to POST to a service operation defined in an ADO.NET Data Service (it is decorated with WebInvoke)? I had no problem calling the service operation as an HTTP GET. However, when I switched to doing a POST, the stack trace consistently comes back with "Parameter cannot be NULL". I am using the jQuery syntax below and sending the POST as JSON (though I'm not including the $.ajax call, the parameter names line up exactly). It would be possible to try this with WCF, as opposed to ADO.NET Data Services (newly renamed WCF Data Services); however, my preference would be to use this approach first. I have tried this with and without the stringify method. Most examples online only show how to POST to entities (i.e. for handling CRUD operations); in this case, however, we are POSTing to a service operation.

    function PostToDataService() {
        varType = "POST";
        varUrl = "http://dev-server/MyServices/MyService.svc/SampleSO";
        varData = { CustomMessage: $("#TextBox1").val() };
        varData = JSON.stringify(varData);
        varContentType = "application/json; charset=utf-8";
        varDataType = "json";
        varProcessData = true;
        CallService();
    }
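
    One thing worth checking, offered as a hedged guess: ADO.NET Data Services reads service-operation parameters from the URI query string only - even for a POST - so a JSON body is never mapped onto the CustomMessage parameter and it arrives null. A sketch of the call with the parameter moved onto the URL (string parameters are passed as quoted literals):

        function PostToDataService() {
            // Service operation parameters ride on the query string, not in the body.
            var message = encodeURIComponent("'" + $("#TextBox1").val() + "'");
            $.ajax({
                type: "POST",
                url: "http://dev-server/MyServices/MyService.svc/SampleSO?CustomMessage=" + message,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (data) { /* handle the result */ }
            });
        }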

    Read the article

  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge number of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like:

    |id|title                    |edition|year|
    |--|-------------------------|-------|----|
    |01|Serie One Title One      |1      |2007|
    |02|Serie One Title One      |2      |2008|
    |03|Serie One Title One      |3      |2009|
    |04|Serie One Title Two      |1      |2001|
    |05|Serie One Title Three    |1      |2008|
    |06|Serie One Title Three    |2      |2009|
    |07|Serie One Title Three    |3      |2010|
    |08|Serie One Title Three    |4      |2011|

    The result I'm looking for is this:

    |id|title                    |edition|year|
    |--|-------------------------|-------|----|
    |03|Serie One Title One      |3      |2009|
    |04|Serie One Title Two      |1      |2001|
    |08|Serie One Title Three    |4      |2011|

    The closest I got was using this statement:

    select id, title, max(edition), max(year) from books where title like "serie one%" group by name;

    but it returns the highest edition and year and includes the first id it finds:

    |01|Serie One Title One      |3      |2009|
    |04|Serie One Title Two      |1      |2001|
    |05|Serie One Title Three    |4      |2011|

    This fancy join also comes close, but doesn't give the right result:

    select b.id, b.title, b.edition, b.year from books b inner join (select name, max(edition) as maxedition from books group by title) g on b.edition = g.maxedition where b.title like "serie one%" group by title;

    Using this I'm getting unique titles, but mostly old editions.
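
    For reference, a sketch of the standard per-group maximum pattern: derive the maximum edition per title, then join back on both columns (assuming title plus edition uniquely identifies a row). The join in the question only matched on edition, which is why it pulled rows from the wrong titles.

        SELECT b.id, b.title, b.edition, b.year
        FROM books b
        INNER JOIN (
            SELECT title, MAX(edition) AS maxedition
            FROM books
            WHERE title LIKE 'serie one%'
            GROUP BY title
        ) g ON g.title = b.title AND g.maxedition = b.edition;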

    Read the article

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing trouble with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it's being run after "InstallFinalize", very late in the process:

    <InstallExecuteSequence>
        <RemoveExistingProducts After='InstallInitialize' />
        <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom>
    </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just defining the Before='StartServices' attribute on the tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is "&DbUpdateManager=3". From what I can deduce from trial and error, this probably means "the DbUpdateManager feature must be published". Now, the trouble is: "PublishFeature" comes way at the end of the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the Before='StartServices' requirement, the condition "the DbUpdateManager feature must be published" isn't true yet, so DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once; no real clear pattern as to what happens when.

    Any more ideas? Is there a way I could check for a condition "the DbUpdateManager feature is installed" which would be true after the "InstallFiles" step? Marc
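
    For what it's worth, a sketch of one way to reschedule this (hedged: the FileKey reference and attribute values are illustrative, and the behavior claims are worth verifying against the Windows Installer docs). Feature action-state conditions such as &DbUpdateManager=3 are resolved once costing completes, not by PublishFeature, so the condition can gate an action scheduled before StartServices; the action itself likely needs to be deferred so that it runs inside the install script, after InstallFiles has physically copied the updater:

        <!-- 'DbUpdateManagerExe' is a hypothetical File/@Id for the updater. -->
        <CustomAction Id='RunDbUpdateManagerAction'
                      FileKey='DbUpdateManagerExe'
                      ExeCommand=''
                      Execute='deferred'
                      Impersonate='no'
                      Return='check' />

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <!-- Deferred: runs in the server-side script after files are on disk,
                 and is sequenced before any installed service starts. -->
            <Custom Action='RunDbUpdateManagerAction' Before='StartServices'>&amp;DbUpdateManager=3</Custom>
        </InstallExecuteSequence>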

    Read the article

  • How to programmatically compare permissions of a login/user in SQL Server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I have no idea what method he is using to do this. According to the one doing the importing, the import works fine on the development server, but when he did the same import in production it gave him errors. Below are the errors he is getting for each account:

    2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
    2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the login/user performing the import on the development server and on production for comparison, and found no difference. My question is: is there a way to programmatically query the server-level and database-level permissions of a particular login/user, so I can compare and contrast for any differences?
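
    A sketch of the permission catalogs that make the comparison scriptable (SQL Server 2005 and later; run each query against both servers and diff the outputs - 'ACCOUNT1' again stands in for the real name):

        -- Server-level permissions granted to the login
        SELECT pr.name, pe.permission_name, pe.state_desc
        FROM sys.server_principals AS pr
        JOIN sys.server_permissions AS pe
          ON pe.grantee_principal_id = pr.principal_id
        WHERE pr.name = 'ACCOUNT1';

        -- Database-level permissions for the mapped user (run in each database)
        SELECT dp.name, pe.class_desc, pe.permission_name, pe.state_desc
        FROM sys.database_principals AS dp
        JOIN sys.database_permissions AS pe
          ON pe.grantee_principal_id = dp.principal_id
        WHERE dp.name = 'ACCOUNT1';

        -- Effective server-level permissions, seen as the login itself
        EXECUTE AS LOGIN = 'ACCOUNT1';
        SELECT * FROM sys.fn_my_permissions(NULL, 'SERVER');
        REVERT;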

    Read the article

  • grails services :: multiple projects

    - by naveen
    PROBLEM: I have multiple Grails projects (let's say appA, appB and appC) - services, to be precise. I want to run them in a single Grails app, probably a WAR deployment. How can I do this?

    REQUIREMENTS: I want this to be a single app since I am deploying it on the cloud and I don't have enough memory to hold all these service instances individually. The reason for multiple Grails projects is scalability, so that if later on I want to run 10 instances of appA, 3 instances of appB, and 1 instance of appC, I should be able to do that.

    EDIT: Can I use something like 0MQ? Would that be helpful in keeping the services separated from each other? How would I package my services? Reading the 0MQ docs, it seems it can work with both in-process and external processes. Would async Grails HTTP requests work with 0MQ in-process / external MQ calls? I haven't used 0MQ, but from the initial docs it seems to work; I need some experienced input on this scenario. Are there any other alternatives, or alternative MQs?

    Read the article

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
    If for some reason you need to specify a specific connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object. A common scenario could be a dev/test/stage/live connection string, but in my case it's for either a live or archive database.

    I however want the connection string to be handled by the DataContext. There are probably lots of different reasons someone would want to do this, but here are mine:

    - I want the same connection string for all instances of DataContext, but I don't know what it is yet!
    - I prefer the clean code and ease of not using a constructor parameter.
    - The refactoring of using a constructor parameter could be a nightmare.

    So my approach is to create a new partial class for the DataContext and handle the empty constructor in there. First, from within the LINQ to SQL designer I changed the connection property to None. This will remove the empty constructor code from the auto-generated designer.cs file. Right-click on the .dbml file, click View Code, and a file and class are created for you! You'll see the new class created in Solution Explorer and the file will open. We are going to be playing with constructors, so you need to add the inheritance from System.Data.Linq.DataContext:

    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
    }

    Add the empty constructor, and I have added a property that will get my connection string. You will have whatever logic you need to decide and get the connection string you require; in my case I will be hitting a database, but I have omitted that code.

    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
        // Connection string keys - stored in web.config
        static string LiveConnectionStringKey = "LiveConnectionString";
        static string ArchiveConnectionStringKey = "ArchiveConnectionString";

        protected static string ConnectionString
        {
            get
            {
                if (DoIWantToUseTheLiveConnection)
                {
                    return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
                }
                else
                {
                    return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
                }
            }
        }

        public DataClasses1DataContext() :
            base(ConnectionString, mappingSource)
        {
            OnCreated();
        }
    }

    Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which connection string I need to use. Nice, clean code that can be easily refactored and tested.
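
    A quick usage sketch (Orders is a hypothetical table on this context):

        // The parameterless constructor now resolves the connection string itself.
        using (var db = new DataClasses1DataContext())
        {
            var recent = db.Orders
                           .Where(o => o.OrderDate > DateTime.Today.AddDays(-7))
                           .ToList();
        }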

    Read the article

  • ORM Profiler v1.1 has been released!

    - by FransBouma
    We've released ORM Profiler v1.1, which has the following new features:

    - Real-time profiling. A real-time viewer (RTV) has been added, which gives insight into the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. The RTV allows the user to record directly to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs of the number of connection-open actions and the number of commands per second. The RTV has a 'range' in which it keeps live data, auto-cleaning data older than that range. (Screenshot of the activity graphs part of the real-time viewer omitted.)

    - Low-level activity tab. A new tab has been added to the application tabs: the low-level activity tab. It shows the main activity as it has been received over the named pipe, and can help to get insight into the chronological activity without the grouping over connections, so multiple simultaneous connections per thread are easier to spot. Clicking a command will sync the rest of the application tabs; clicking a row will show the details below the splitter bar, as with the other application tabs.

    - Default application name in interceptor. When an empty string or null is passed for the application name to the interceptor's Initialize method, the AppDomain's friendly name is used instead.

    - Copy call stack to clipboard. A call stack viewed in a grid in various parts of the UI can now be copied to the clipboard by clicking a button.

    - Enable/disable interceptor from the config file. It's now possible to enable or disable the interceptor's initialization from the application's config file, using:

    <appSettings>
        <add key="ORMProfilerEnabled" value="true"/>
    </appSettings>

    If the value is true, the interceptor's Initialize method will proceed. If the value is false, the Initialize method will not proceed and initialization won't be performed, meaning no interception will take place. If the setting is absent or misconfigured, the Initialize method will proceed as normal and perform the initialization.

    - Stored procedure calls for select databases are now properly displayed as a call. For SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix, a stored procedure call is displayed as an execute/call statement, and copy-to-clipboard works as-is.

    I'm especially happy with the new real-time profiling feature, the flagship feature for this release: it offers a completely new way to use the profiler, namely directly during debugging - you can immediately see what's going on without needing a snapshot. The activity graph feature, combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and still spot any spike of activity in the profiled application.

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources; it is impossible to list all of them. I have selected only the ones most valuable to me and, of course, the ones I contributed to.

    Books:
    - Chris J. Date: An Introduction to Database Systems - IMO a "must" to understand the relational model correctly.
    - Terry Halpin, Tony Morgan: Information Modeling and Relational Databases - meet the object-role modeling leaders.
    - Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL - all the theory needed to manage temporal data.
    - Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation - the best SQL Server-focused data modeling book I know, by two of my friends.
    - Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 - SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
    - Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying - Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
    - Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming - I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
    - Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 - my first two chapters are about data warehouse design and implementation.

    Courses:
    - Data Modeling Essentials - a 3-day course I wrote for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.
    - Logical and Physical Modeling for Analytical Applications - an online course I wrote for Pluralsight.
    - Working with Temporal Data in SQL Server - my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.

    Forthcoming presentations: SQL Bits 12, July 17th-19th, Telford, UK - I have a full-day pre-conference seminar, Advanced Data Modeling Topics, there.

    Read the article

  • How to prevent 'Select *': the elegant way

    - by Dave Ballantyne
    I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently, see my post here and article here for more details on its use and some uses. An obvious use is to interrogate sql scripts to enforce our coding standards.  In the SQL world a no-brainer is SELECT *,  all apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way” as this is a blatant rip-off IMO, the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself.  The parser mentioned above is ,pretty much, the best tool for doing this.  So without further ado lets have a look at a powershell script that does exactly that : cls #Load the assembly [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' #Create the object $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $SqlArr = Get-Content "C:\scripts\myscript.sql" $Sql = "" foreach($Line in $SqlArr){ $Sql+=$Line $Sql+="`r`n" } $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false $Batch = "" $BatchStart =0 $Start=0 $End=0 $State=0 $SelectColumns=@(); $InSelect = $false $InWith = $false; while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { $Str = $Sql.Substring($Start,($End-$Start)+1) try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null #Write-Host $TokenPrs if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ $InSelect =$true $SelectColumns+="" } if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){ $InSelect =$false #Write-Host $SelectColumns -BackgroundColor Red foreach($Col in $SelectColumns){ if($Col.EndsWith("*")){ Write-Host "select * is not allowed" exit } } $SelectColumns =@() } }catch{ #$Error $TokenPrs = $null } if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ if($Str -eq ","){ $SelectColumns+="" }else{ $SelectColumns[$SelectColumns.Length-1]+=$Str } } } OK, im not going to pretend that its the prettiest of powershell scripts,  but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host.  So, where can this go wrong ?  It cant ,or at least shouldn’t , go wrong, but it is lacking in functionality.  IMO, Select * should be allowed in CTEs, views and Inline table valued functions at least and as it stands they will be reported upon. Anyway, it is a start and is more reliable that other methods.

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - i.e. not a PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join; if there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator.

    That behavior means that if you run such a query with a degree of parallelism of 4, Oracle will allocate 8 parallel processes: 4 producers and 4 consumers. The consumers only actually do work once the producers are fully done scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true; the above is why you see 2 times the processes of the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and giving them to different processes, a PWJ uses only a single set of processes: each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the previous one. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort stuff). The important piece is that the PWJ plan will typically be faster and, in PX process count and resource terms, cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on them go to Maria, as she actually produced these plans and is the expert on plans in general. You can see her talk about explaining the explain plan and other optimizer topics at ODTUG in Washington DC (June 27 - July 1), on the Optimizer blog, and at OpenWorld in San Francisco (September 19 - 23). Happy joining, and hope to see you all at ODTUG and OOW...
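
    To make the distinction concrete, a minimal sketch of the setup that lets the optimizer pick a full PWJ - both tables hash-partitioned identically on the join key (all names are illustrative):

        CREATE TABLE sales     (cust_id NUMBER, amount NUMBER)
          PARTITION BY HASH (cust_id) PARTITIONS 16;

        CREATE TABLE customers (cust_id NUMBER, name VARCHAR2(100))
          PARTITION BY HASH (cust_id) PARTITIONS 16;

        -- Equi-partitioned on the join key, so each PX server joins its own
        -- partition pair directly: one process set, no producer/consumer
        -- redistribution and no PX SEND/RECEIVE pair in the plan.
        SELECT /*+ PARALLEL(s 4) PARALLEL(c 4) */ c.name, SUM(s.amount)
        FROM   sales s JOIN customers c ON s.cust_id = c.cust_id
        GROUP  BY c.name;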

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds.

    I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of the 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both temporary tables and start comparing one by one using cursors. In this case, the procedure would run on the server where one of the sources lives, and the contents of the other would be looked up over a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output each query to a text file, one per source. Then open the first file and load all of its contents into a hash_map (key-value) using C++, where the key is the phone number and the value is a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look up numberX's entry in the hash_map (which will be a list of times) and check whether timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be feasible; that's the reason I wanted to use hash_maps. Which do you think is the better option?
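
    A minimal sketch of option 2 in standard C++ (assumptions: one "number epoch_seconds" pair per line, the file names are placeholders, and a ±2 second tolerance covers the timestamp drift):

        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        int main() {
            // number -> list of billed timestamps (seconds since epoch)
            std::unordered_map<std::string, std::vector<long>> billed;

            std::ifstream billedFile("sms_billed.txt"), sentFile("sms_sent.txt");
            std::string number;
            long t;

            while (billedFile >> number >> t)
                billed[number].push_back(t);

            // Stream the sent records; report any with no billed entry
            // within 2 seconds for the same number.
            while (sentFile >> number >> t) {
                bool charged = false;
                auto it = billed.find(number);
                if (it != billed.end())
                    for (long b : it->second)
                        if (std::labs(b - t) <= 2) { charged = true; break; }
                if (!charged)
                    std::cout << "uncharged: " << number << ' ' << t << '\n';
            }
        }

    The inner loop is linear per number, which is fine when each phone sends a handful of messages per hour; sorting each vector and binary-searching would tighten it if some numbers send thousands.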

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
            [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US), and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible; nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables.

    Design goals: this table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: there are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
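
    For comparison, a sketch of the obvious first pass (my own reading of the stated ranges, not the author's follow-up answer): shrink every column to the smallest type that still satisfies the requirements.

        -- Assumes SQL Server 2008+ for the 3-byte date type.
        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,        -- must reach 50,000, so int stays
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryID] [smallint] NOT NULL,     -- join to the existing < 300-row country table
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [smallint] NOT NULL,     -- only -1/0/1; tinyint cannot hold -1
            [Period] [date] NOT NULL,            -- date only: 3 bytes vs datetime's 8
            [Clicks] [smallint] NOT NULL,        -- 0..5,000
            [Impressions] [int] NOT NULL         -- 0..5,000,000
        ) ON [PRIMARY]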

    Read the article

  • Looking for recommendations for a server-side newsletter program

    - by Sparky672
    Hello. I'm currently using a server-side, SQL-based mailing list program called Php-List on multiple sites, and it works fairly well. But installation and setup are quite cumbersome and quirky, and the interface is not well organized - neither is the code, with pieces all over the place in random fashion. Customizing the look and feel and full site integration are both tedious and painful. Upgrading the version is made more complex since multiple edits need to be manually transferred each time. Also, probably due to a poor English translation, descriptions and instructions within certain areas of the user interface are contradictory and unclear; you just have to play with it and remember what you did the last time it worked. It's supposed to let my customers send out their own newsletters: after supplying a written tutorial, about half of them seem to stumble through it okay, and the other half just hire me to do it for them. So it's not quite easy enough for most average people to use. I'm looking for something that's as easy for them as using a blog or discussion forum. It also must be easier to set up and integrate into a site than Php-List. I have no problem getting dirty and writing CSS or HTML by hand, nor do I have any problem editing the program code. Perhaps what I'm looking for is a solution that is better organized, has a cleaner GUI, and is template- or "skin"-based. If I spend many hours customizing a skin, I want to be able to simply update the program and re-use my custom skin without reproducing the tedious setup over and over. (I currently maintain a list of about 25 things I must manually edit or add, across multiple files in multiple directories, each time I install or upgrade Php-List.) A great example of what I'm looking for is WordPress or phpBB: both are easy to install and customize, yet powerful and packed with features, and they're VERY well organized, making customization less painful. So, enough yammering for now: does anyone know of something besides Php-List with many of the same features - a mailing list maintained in a server-side database, custom sign-up pages, automatic opt-in/opt-out, custom HTML newsletter templates, etc.? Thank you!

    Read the article

  • Why am I getting Network error: 403 Forbidden in firebug for files I am not trying to access?

    - by moomoochoo
    QUESTIONS: I'd like to know (1) why I am getting "Network error: 403 Forbidden" in Firebug for files I am not trying to access, (2) whether it is likely to cause any serious problems on the web server, (3) how to fix it, and (4) why my browser is trying to access the files in the error messages.

    DETAILS: I'm using WampServer 2.2 to access a folder via the browser. The browser is on the same computer as the server, which runs Windows 7 Ultimate. When I view a web folder via my browser (hXXp://localhost/folder) I can see the folder contents OK, but in Firebug I get "Network error: 403 Forbidden". I'm not deliberately trying to access the files in the error messages; you will notice they are in a completely different folder from the one I am looking at. I checked apache_error.log and see:

    [Wed Sep 26 00:05:10 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/apache2, referer: hxxp://localhost/folder/

    WampServer 2.2 is installed on drive D. I took a look at the httpd.conf file, but I couldn't find any references to C:. When I look in Apache's access.log I see:

    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/blank.gif HTTP/1.1" 403 217
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/back.gif HTTP/1.1" 403 216
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/text.gif HTTP/1.1" 403 216
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/unknown.gif HTTP/1.1" 403 219
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/folder.gif HTTP/1.1" 403 218

    CONFIGURATION: WampServer 2.2 installed on drive D; Apache 2.2.22; PHP 5.4.3; MySQL 5.5.24; Firebug 1.10.3; Firefox 15.0.1
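
    The access log is the clue: /icons/*.gif are the images Apache's auto-generated directory index (mod_autoindex) pulls in, which is why the browser requests files you never asked for. A hedged guess at the cause, worth checking in httpd.conf: an Alias /icons/ still pointing at the compiled-in default C:/Apache2/ while WampServer lives on D:. A sketch of the corrected Apache 2.2 directives (the D: path is illustrative - use your actual Apache icons directory):

        Alias /icons/ "D:/wamp/bin/apache/apache2.2.22/icons/"
        <Directory "D:/wamp/bin/apache/apache2.2.22/icons">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>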

    Read the article

  • Word 2007 crashes on Server 2008 R2 terminal services

    - by John Rennie
    We are finding that Word 2007 (with SP2) crashes when used on a Windows 2008 R2 terminal server. Typically it crashes when you click File/Open or File/Save, but not every time - maybe one time in four - and, just to be really confusing, on a test server in my office I can't make it crash. I have just today set up a brand new shiny 2k8 R2 terminal server with as simple a setup as possible, e.g. no anti-virus to confuse things, and we're still seeing crashes. My question is: has anyone else seen this, and if so, any clues on what's happening? We have a support case open with Microsoft, and the MS support engineer has conceded it's happening, but has so far been unable to find the reason. One possible factor is that all the 2k8 R2 terminal servers I've seen this on have been Hyper-V VMs (running on a 2k8 R2 host). I'm about to put in a physical 2k8 R2 terminal server at the customer where we're seeing the most crashes, in case this is relevant. More news soon. Sorry if this posting seems a bit vague, but this has just bitten us and is causing a lot of pain and sleepless nights :-( If anyone can help I'll be enormously grateful!

    Update: we've given up and gone back to 2008 pre-R2. Both Office 2003 and 2007 work fine now. I think there are some problems with TS in R2. Googling doesn't find much, so I thought it was just me; it's reassuring to find that someone else has seen the same problem.

    Read the article

  • git problems installing stuff [closed]

    - by dale
    root@Frenzen:~# cd
    root@Frenzen:~# git clone --depth 1 git://source.ffmpeg.org/ffmpeg
    Initialized empty Git repository in /root/ffmpeg/.git/
    root@Frenzen:~# cd
    root@Frenzen:~# git clone --depth 1 git://source.ffmpeg.org/ffmpeg
    Initialized empty Git repository in /root/ffmpeg/.git/
    root@Frenzen:~# git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg
    Initialized empty Git repository in /root/ffmpeg/.git/
    root@Frenzen:~# cd
    root@Frenzen:~# wget "http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz" -O ffmpeg-snapshot.tar.gz
    --2012-10-05 05:43:55--  http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz
    Resolving git.videolan.org... 2a01:e0d:1:3:58bf:fa76:0:1, 88.191.250.118
    Connecting to git.videolan.org|2a01:e0d:1:3:58bf:fa76:0:1|:80...
    root@Frenzen:~# cd
    root@Frenzen:~# wget "http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz" -O ffmpeg-snapshot.tar.gz
    --2012-10-05 05:44:17--  http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz
    Resolving git.videolan.org... 2a01:e0d:1:3:58bf:fa76:0:1, 88.191.250.118
    Connecting to git.videolan.org|2a01:e0d:1:3:58bf:fa76:0:1|:80...

    Read the article

  • Terminal Services printing

    - by Adam
    Hi. We are using a hosted terminal server to run some applications. We have three users who connect to the server through RDP and try to print to a networked printer called HP Photosmart C7280. One of the users is on Windows XP Pro 32-bit, and when they print through the terminal server it works fine. Another user is on Vista 32-bit, and printing through the terminal server also works fine. The third user is on Windows 7 64-bit, and when they print through the terminal server it only prints the first line of the page (a test page prints about 3/4 of the page, compared to the whole page when printing from the other two machines). We are only printing from Word 2007 and Excel 2007 on all machines. The server is Windows 2003. There are no errors in the event log. Any ideas?

    Read the article
