Search Results

Search found 117232 results on 4690 pages for 'sql user group'.

  • Setting Up Your SQL Server Agent Correctly

    It is important to set up SQL Server Agent security on the principle of 'executing with minimum privileges', and to ensure that errors are properly logged and alerts are set up for a comprehensive range of errors. SQL Server Agent allows fine-grained control of every job step, which should allow tasks to be run entirely safely even if they occasionally need special privileges.
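    In practice that minimum-privilege pattern comes down to a credential plus an Agent proxy that individual job steps run under. A minimal T-SQL sketch follows; the account, credential and proxy names are placeholders, and CmdExec is just one example subsystem:

        -- Store the low-privilege Windows account that Agent job steps should impersonate.
        USE master;
        CREATE CREDENTIAL AgentJobCred
            WITH IDENTITY = 'DOMAIN\svc_agent_jobs',    -- placeholder account
                 SECRET   = '<strong password here>';
        GO
        -- Wrap the credential in a proxy, and allow that proxy to run
        -- CmdExec job steps only (grant other subsystems as actually needed).
        USE msdb;
        EXEC dbo.sp_add_proxy
             @proxy_name      = N'LimitedCmdExecProxy',
             @credential_name = N'AgentJobCred';
        EXEC dbo.sp_grant_proxy_to_subsystem
             @proxy_name     = N'LimitedCmdExecProxy',
             @subsystem_name = N'CmdExec';
        GO

    Each job step that needs the elevated rights can then select the proxy in its 'Run as' setting, while every other step keeps the default, lower-privileged context.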

  • GWT - occasional com.google.gwt.user.client.rpc.SerializationException

    - by user214984
    Hello, we are haunted by occasional occurrences of exceptions such as:

        com.google.gwt.user.client.rpc.SerializationException: Type 'xxx' was not assignable to 'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field serializer. For security purposes, this type will not be serialized.: instance = xxx
            at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter.serialize(ServerSerializationStreamWriter.java:610)
            at com.google.gwt.user.client.rpc.impl.AbstractSerializationStreamWriter.writeObject(AbstractSerializationStreamWriter.java:129)
            at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter$ValueWriter$8.write(ServerSerializationStreamWriter.java:152)
            at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter.serializeValue(ServerSerializationStreamWriter.java:534)
            at com.google.gwt.user.server.rpc.RPC.encodeResponse(RPC.java:609)
            at com.google.gwt.user.server.rpc.RPC.encodeResponseForSuccess(RPC.java:467)
            at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:564)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188)
            at de.softconex.travicemanager.server.TraviceManagerServiceImpl.processCall(TraviceManagerServiceImpl.java:615)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224)
            at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:179)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262)
            at org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:419)
            at org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:378)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1508)
            at java.lang.Thread.run(Thread.java:619)

    The application is normally running fine. The indicated class implements Serializable (the whole object graph). So far the only patterns/observations are:

    - we seem to have the issue only when the application is used inside an iframe
    - the problem seems to happen when a new version of the application has been deployed
    - running Firefox in privacy mode (disabling all caches etc.) doesn't fix the problem

    Any ideas?
    Holger

  • SharePoint user details not visible to other users

    - by richardoz
    I am managing a SharePoint site that uses Forms Based Authentication. We have several generic lists, document libraries and active task lists that users can create, update and delete. Users can use the people pickers to select/search for everyone, but users cannot see other users' names, email addresses etc. in display lists or the people pickers. If I log in as the site collection administrator, I can see everyone's details, so I know the data is available.

    Updated details on this problem: non-administrator SharePoint users cannot see other users' information. Example: user A assigns a task to user B. User A creates a new task and uses the people picker to find user B. User B is only visible by the login name "bname", and any information about user B is not visible or searchable within the people picker. Once user B is assigned the task, user A no longer sees the name in the task list, even though user A created it. No Modified By, Created By, Assigned To or Owner field data is visible to non-administrator users.

    Facts:
    - The extranet site is configured to use Forms Based Authentication; the intranet uses Windows authentication.
    - Users of both the intranet and extranet have the same problem.
    - All databases are local.
    - The site uses SSRS integration.
    - SharePoint WSS on Windows 2003 Std.

    After activating verbose logging, it looks like SharePoint is definitely asking SQL Server for only the user info of the currently logged-in user:

        SELECT TOP 6 /* lots of columns */
        FROM UserData
        INNER MERGE JOIN Docs AS t1
            ON (1 = 1
                AND UserData.[tp_RowOrdinal] = 0
                AND t1.SiteId = UserData.tp_SiteId
                AND t1.SiteId = @L2
                AND t1.DirName = UserData.tp_DirName
                AND t1.LeafName = UserData.tp_LeafName
                AND t1.Level = UserData.tp_Level
                AND t1.IsCurrentVersion = 1
                AND (1 = 1))
        LEFT OUTER JOIN AllUserData AS t2
            ON (UserData.[tp_Author] = t2.[tp_ID]
                AND UserData.[tp_RowOrdinal] = 0
                AND t2.[tp_RowOrdinal] = 0
                AND ((t2.tp_IsCurrent = 1))
                AND t2.[tp_CalculatedVersion] = 0
                AND t2.[tp_DeleteTransactionId] = 0x
                AND t2.tp_ListId = @L3
                AND UserData.tp_ListId = @L4
                AND t2.[tp_Author] = 162 /* this is the currently logged in user */)
        WHERE (UserData.tp_IsCurrent = 1)
            AND UserData.tp_SiteId = @L2
            AND (UserData.tp_DirName = @DN)
            AND UserData.tp_RowOrdinal = 0
            AND (((UserData.[datetime1] IS NULL) OR (UserData.[datetime1] = @L5DTP))
                AND t1.SiteId = @L2
                AND (t1.DirName = @DN))
        ORDER BY UserData.[tp_Modified] DESC, UserData.[tp_ID] ASC

    Again, any ideas would be appreciated.

  • How to allow multiple inputs from user using R?

    - by Juan
    For example, the user needs to specify the number of rows and columns of a matrix:

        PROMPT: Number of rows?:
        USER INPUT: [a number]

    I need R to 'wait' for the input, then save [a number] into a variable v1. Next:

        PROMPT: Number of columns?:
        USER INPUT: [another number]

    Also save [another number] into a variable v2. At the end, I will have two variables (v1, v2) that will be used in the rest of the code. readline only works for one input at a time; I can't run the two lines together:

        v1 <- readline("Number of rows?: ")
        v2 <- readline("Number of columns?: ")

    Any ideas or suggestions? Thank you in advance.

  • Device CAL, User Cal or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard Edition processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:

        40    --  3           -- [2]                          --  2           --  1
        nodes --  aggregators --  raw (which we want to run EE) --  calculators --  datawarehouse

    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raw, and calculators PUSH to the datawarehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices or a total?
    Q5) Do device and user CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

  • Where should I store user config data? Specifically the path to the data file?

    - by jamone
    I have an app using a SQLite db, and I need the ability for the user to move the data file and point the app to where it moved to. I used the Entity Framework to create the model, and by default it puts the connection string in the App.Config file. From what I've read, if I make changes to the connection string there, they won't take effect until the app is restarted. That seems a bit clunky for my use. I see how I can init my model and pass in a custom string, but I'm unsure what the best practice is on where to store basic user preferences such as this: an INI file, the registry, somewhere else? I don't want the user to have to "Open" the file each time, just when it relocates; the app should then try to auto-open it from then on.

  • ASP.Net: User control with content area, it's clearly possible but I need some details.

    - by bert
    I have seen two suggestions for my original question about whether it is possible to define a content area inside a user control, and there are some helpful suggestions, i.e. http://stackoverflow.com/questions/1971498/passing-in-content-to-asp-net-user-control and http://stackoverflow.com/questions/1912283/asp-net-user-control-inner-content. Now, I like the theory of the latter better than the former, just for aesthetic reasons; it seems to make more sense to me. But the example given uses two variables, content and templateContent, that the answerer has not defined in their example code, and without these details I have found that the example does not work. I guess they are properties of the control, or some such? The former example seems workable, but I'd prefer to go with the latter if someone could fill in the blanks for me. Thanks.

  • SQL query - choosing 'last updated' record in a group, better db design?

    - by Jimmy
    Hi, let's say I have a MySQL database with 3 tables:

    - Table 1: Persons, with 1 column, ID (int)
    - Table 2: Newsletters, with 1 column, ID (int)
    - Table 3: Subscriptions, with columns Person_ID (int), Newsletter_ID (int), Subscribed (bool), Updated (datetime)

    Subscriptions.Person_ID points to a Person, and Subscriptions.Newsletter_ID points to a Newsletter. Thus, each person may have 0 or more subscriptions to 0 or more newsletters at once. The Subscriptions table will also store the entire history of each person's subscriptions to each newsletter. If a particular Person_ID-Newsletter_ID pair doesn't have a row in the Subscriptions table, then it's equivalent to that pair having a subscription status of 'false'. Here is a sample dataset:

        Persons: ID = 1, 2, 3
        Newsletters: ID = 4, 5, 6

        Subscriptions:
        Person_ID  Newsletter_ID  Subscribed  Updated
        2          4              true        2010-05-01
        3          4              true        2010-05-01
        3          5              true        2010-05-10
        3          4              false       2010-05-15

    Thus, as of 2010-05-16, Person 1 has no subscription, Person 2 has a subscription to Newsletter 4, and Person 3 has a subscription to Newsletter 5. Person 3 had a subscription to Newsletter 4 for a while, but not anymore. I'm trying to do 2 kinds of query:

    1. A query that shows everyone's active subscriptions as of query time (we can assume that Updated will never be in the future; thus, this means returning the record with the latest Updated value for each Person_ID-Newsletter_ID pair, as long as Subscribed is true; if the latest record for a pair has a Subscribed status of false, then I don't want that record returned).
    2. A query that returns all active subscriptions for a specific newsletter, with the same qualification as in 1. regarding records with 'false' in the Subscribed column.

    I don't use SQL/databases often enough to tell if this design is good, or if the SQL queries needed would be slow on a database with, say, 1M records in the Subscriptions table. I was using the visual query builder tool in Visual Studio 2010, but I can't even get the query to return the latest updated record for each Person_ID-Newsletter_ID pair. Is it possible to come up with SQL queries that don't involve using subqueries (presumably because they would become too slow with a larger data set)? If not, would it be a better design to have a separate Subscriptions_History table, and every time a subscription status for a Person_ID-Newsletter_ID pair is added to Subscriptions, any existing record for that pair is moved to Subscriptions_History (that way the Subscriptions table only ever contains the latest status update for any Person_ID-Newsletter_ID pair)? I'm using .NET on Windows, so would it be easier (or the same, or harder) to do this kind of query using LINQ? Entity Framework? Thanks!
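    One way to write query 1 without vendor-specific features is the classic latest-row-per-group self-join; a sketch against the schema above (untested; it assumes a Person_ID-Newsletter_ID pair never has two rows with the same Updated value, and an index on (Person_ID, Newsletter_ID, Updated) would keep the derived table cheap):

        -- Latest status per Person_ID-Newsletter_ID pair,
        -- kept only when that latest status says 'subscribed'.
        SELECT s.Person_ID, s.Newsletter_ID, s.Updated
        FROM Subscriptions AS s
        INNER JOIN
        (   SELECT Person_ID, Newsletter_ID, MAX(Updated) AS LastUpdated
            FROM Subscriptions
            GROUP BY Person_ID, Newsletter_ID
        ) AS latest
            ON  latest.Person_ID     = s.Person_ID
            AND latest.Newsletter_ID = s.Newsletter_ID
            AND latest.LastUpdated   = s.Updated
        WHERE s.Subscribed = 1;   -- 1 = true
        -- Query 2 is the same statement with one extra predicate,
        -- e.g. AND s.Newsletter_ID = 4, for the newsletter being reported on.

    It does use a derived-table subquery, but with that index the plan stays reasonable; the Subscriptions_History split is a fair alternative if this ever proves too slow in practice.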

  • User Interface. Multiple select with priority.

    - by Andrew Florko
    I'm designing a user interface and want to ask your advice on how to make it more user-friendly. Please share any suggestions, and if you have ever seen an implementation of something similar, please share the link. University: there are 40+ specialities grouped into 5 faculties. The user chooses several he is interested in and then orders them by priority. For example, I am interested in "programming microcontrollers", "system analysis" and "experimental physics". I must find them quickly in the "programming faculty", select them, and then order them: what I prefer most, and what I prefer less than the others I selected. Any ideas welcome :)

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same: too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let's get to see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100,
          MAXSIZE = 500,
          FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB,
          MAXSIZE = 250MB,
          FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB. Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB,
          MAXSIZE = 25GB,
          FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB,
          MAXSIZE = 250GB,
          FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria; what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
          DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        ( Col_A int IDENTITY,
          Col_B CHAR(8000) )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB ) *

    * If you don't know the file name of your log file, run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.
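    In the meantime, a quick manual count is easy enough: the DBCC LOGINFO output can be captured into a temp table. This sketch is not from the original article and assumes the SQL Server 2008 column layout (later versions add a RecoveryUnitId column at the front):

        -- Count the VLFs in the current database.
        CREATE TABLE #VLF
        ( FileId      int,
          FileSize    bigint,
          StartOffset bigint,
          FSeqNo      int,
          Status      int,
          Parity      tinyint,
          CreateLSN   numeric(25,0) );

        INSERT #VLF
        EXECUTE ('DBCC LOGINFO');

        SELECT COUNT(*) AS VLF_Count FROM #VLF;

        DROP TABLE #VLF;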
    Hassle free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    - MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    - Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    - Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c: SQL Text Expansion. It will come in handy in two cases:

    - You are asked to tune what looks like a simple query, maybe a two table join with simple predicates. But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.
    - You are asked to take a look at a query against tables with VPD (virtual private database) policies. In other words, you have no idea what you are trying to 'tune'.

    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example, take the common view ALL_USERS. We can now:

        ops$tkyte%ORA12CR1> variable x clob
        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from all_users',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED"
        "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME",
        "A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",
        DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON"
        FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2"
        WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#"
        AND "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime, regardless of how many views of views you might have. You can see the expanded text, and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.

    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best. Christian Antognini wrote up a way to sort of see it, but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/. But now, with this function, it becomes rather trivial to see the expanded SQL after the VPD has been applied.
    We can see this by setting up a small table with a VPD policy:

        ops$tkyte%ORA12CR1> create table my_table
          2  (  data        varchar2(30),
          3     OWNER       varchar2(30) default USER
          4  )
          5  /

        Table created.

        ops$tkyte%ORA12CR1> create or replace
          2  function my_security_function( p_schema in varchar2,
          3                                 p_object in varchar2 )
          4  return varchar2
          5  as
          6  begin
          7     return 'owner = USER';
          8  end;
          9  /

        Function created.

        ops$tkyte%ORA12CR1> begin
          2     dbms_rls.add_policy
          3     ( object_schema   => user,
          4       object_name     => 'MY_TABLE',
          5       policy_name     => 'MY_POLICY',
          6       function_schema => user,
          7       policy_function => 'My_Security_Function',
          8       statement_types => 'select, insert, update, delete',
          9       update_check    => TRUE );
         10  end;
         11  /

        PL/SQL procedure successfully completed.

    And then expanding a query against it:

        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from my_table',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA",
        "A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2"
        WHERE "A2"."OWNER"=USER@!) "A1"

    Not an earth shattering new feature, but extremely useful in certain cases. I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

  • SharePoint threw "Unknown SQL Exception 206 occured." Anyone familiar with this?

    - by dalehhirt
    Our SharePoint instance threw the following errors when attempting to access data through a Content Query Tool:

        04/02/2010 10:45:06.12  w3wp.exe (0x062C)  0x1734  Windows SharePoint Services  Database  5586  Critical
        Unknown SQL Exception 206 occured. Additional error information from SQL Server is included below.
        Operand type clash: uniqueidentifier is incompatible with datetime

        04/02/2010 10:45:06.25  w3wp.exe (0x062C)  0x1734  Office Server  Office Server General  900n  Critical
        A runtime exception was detected. Details follow.
        Message: Operand type clash: uniqueidentifier is incompatible with datetime
        Techinal Details: System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
            at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
            at System.Data.SqlClient.SqlDataReader.ConsumeMetaData(...)
            at System.Data.SqlClient.SqlDataReader.get_MetaData()
            at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
            at System.Data.SqlClient.SqlC... (truncated in the log)

        04/02/2010 10:45:06.25  w3wp.exe (0x062C)  0x1734  CMS Publishing  8vyd  Exception (Watson Reporting Cancelled)
        System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
            at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
            at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
            at System.Data.SqlClient.SqlDataReader.get_MetaData()
            at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
            at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
            at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
            at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
            at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
            at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
            at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior)
            at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean& bSucceed)
            at Microsoft.SharePoint.Library.SPRequestInternalClass.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
            at Microsoft.SharePoint.Library.SPRequest.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
            at Microsoft.SharePoint.SPWeb.GetSiteData(SPSiteDataQuery query)
            at Microsoft.SharePoint.Publishing.CachedArea.GetCrossListQuery(SPSiteDataQuery query, SPWeb currentContext)
            at Microsoft.SharePoint.Publishing.CrossListQueryCache.GetSiteData(CachedArea cachedArea, SPWeb web, SPSiteDataQuery query)

        04/02/2010 10:45:06.25  w3wp.exe (0x062C)  0x1734  CMS Publishing  78ed  Warning
        Error occured while processing a Content Query Web Part. Performing the following query '...ue" Type="Number"/ (truncated in the log)

    The farm is MOSS 2007 with a SQL Server 2005 backend. Any ideas are welcomed.
    Dale

  • The User Profile Service

    - by Daryl Gill
    I tried logging into my computer after an "ungraceful" shutdown due to a power cut. I got prompted with a message stating that the User Profile Service failed the login. After reading a KB article on Microsoft's site, I managed to log in to the "corrupt" account again by making registry edits. But the next step (after regaining the ability to log on) is a major concern for me. Would the guys at Super User suggest:

    1) Carry on using said account
    2) Re-install Windows to eliminate the possible chance of a recurring problem
    3) Migrate all user data over to a new account

    With that said, I had to issue the command line:

        net user administrator /active:yes

    to activate the hidden Administrator account. Would it be recommended/secure to disable that account or leave it active?

  • Initial deployment of ClickOnce product on corporate network

    - by MrEdmundo
    Hi there. I'm a developer looking at introducing ClickOnce deployment for an internal .NET WinForms application that will be distributed via the corporate network. Currently the product rollout and updates are handled by Group Policy; however, I would like to control the updates via ClickOnce deployment now. What I would like to know is: how should I initially roll out the package to make sure that all users have got it? Can I use a combination of Group Policy (for the rollout) and then rely on the ClickOnce deployment model for any further updates?

  • How to backup/restore excluding filestream varbinary in SQL Server 2008?

    - by fdierre
    There is an application used at a production site that uses SQL Server 2008 as its DBMS. The database schema uses FILESTREAM varbinary columns to save binary data on the filesystem instead of directly in the DB tables. The point is that now and then it would be useful to copy the production database onto development machines, mostly for troubleshooting. The database is too big to comfortably move around, but it would be OK if it could be moved leaving out the FILESTREAM varbinary fields. In other words, I am trying to make an "imperfect" copy of a database: on the destination database, it is OK to have NULL values instead of the varbinary data. Is this possible? I tried looking for the feature in SQL Server Management Studio and made a backup that excludes the filegroup containing the FILESTREAM varbinary columns, but I cannot restore: Management Studio complains that the restore cannot be done because the backup is incomplete (of course). Is it possible to achieve what I am trying to do in some way?
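    One route worth trying, if the FILESTREAM filegroup can be marked read-only for the duration of the backup, is a partial backup plus a piecemeal restore. This is only a sketch with placeholder names, and note one caveat: on the restored copy, querying the unrestored column raises an error rather than returning NULL.

        -- Production: back up only the read-write filegroups,
        -- skipping the (read-only) FILESTREAM filegroup.
        BACKUP DATABASE ProdDb
            READ_WRITE_FILEGROUPS
            TO DISK = 'D:\Backups\ProdDb_partial.bak';

        -- Development: piecemeal restore of the primary filegroup only.
        RESTORE DATABASE ProdDb
            FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\Backups\ProdDb_partial.bak'
            WITH PARTIAL, RECOVERY;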

  • Can I set up a linked SQL Server connection between servers on different networks?

    - by Glenn Slaven
    We have a production SQL Server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to set up a SQL job that copies content from a table on the staging server to prod on a regular basis, and I think we need to set up a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers; I just need to know what network and security configuration needs to happen from the hosting company's perspective so I can ask them to do it.
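    For reference, once the hosting company has opened the SQL Server port through their firewall to your staging server (1433 by default, ideally restricted to your public IP), the staging-side setup is a pair of system procedures; a sketch with placeholder names and credentials:

        -- Run on the staging server: register the remote production server...
        EXEC master.dbo.sp_addlinkedserver
             @server     = N'PROD',
             @srvproduct = N'',
             @provider   = N'SQLNCLI',
             @datasrc    = N'prod.yourhost.example,1433';  -- placeholder public host,port

        -- ...and map a local login to a SQL login that exists on the remote box.
        EXEC master.dbo.sp_addlinkedsrvlogin
             @rmtsrvname  = N'PROD',
             @useself     = 'FALSE',
             @locallogin  = NULL,
             @rmtuser     = N'staging_copy',
             @rmtpassword = '<password>';

        -- Quick smoke test:
        SELECT TOP (1) * FROM [PROD].TargetDb.dbo.SomeTable;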

  • SQL Server Analysis Services 2005 crash when disk is full?

    - by squillman
    One of our SQL boxes ran itself out of disk space last night. This particular server has both the database engine and Analysis Services on it. The database engine was not happy about having no disk space on the volume where all the data files are, but Analysis Services just plain died. At least, the only thing I have to blame is the full volume. Has anyone experienced an SSAS failure that they've been able to tie directly to running out of disk space? I've got nothing else in the SQL or event logs to blame...

  • How to switch to user perspective in Blender

    - by Cyanophage
    I've just started using Blender (v2.63) for the first time. I have used a lot of 3D modelling programs in the past, but Blender's controls seem to be set up differently and it's taking a while to get used to them. The thing that is really bugging me at the moment is switching from user perspective to front/side/top views. I have my view in perspective mode and I want to switch to front view in orthographic mode, and then switch back to the view I had before. It seems that the only way is to press numpad 0 to go to camera view, rotate the screen a bit to get rid of the annoying black outline, and then switch it back to perspective mode. Then sometimes when I go back to front view it's in perspective mode, which is annoying, as you never want front view in perspective, only orthographic mode. My question: is there a way I can be in my user view in perspective mode, switch to front/side/top views in orthographic mode, and then switch back to where I was before in my user view, back in perspective mode?

  • Can a Shadow Copy of SQL 2000 database files be used as a restore?

    - by Keith Sirmons
    Howdy, I have a SQL 2000 instance (version 8.00.760) that is on a drive that gets regular shadow copies. Can a shadow copy be used to restore the database? It seems possible to stop the SQL service, restore the Data folder from the shadow copy (which includes msdb, master, model, tempdb, and the user databases), then restart the service. Would the files be crash-consistent in the worst case? If so, when restarting the service wouldn't it recover as if the power had been pulled from the server? Thank you, Keith
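    If you do attempt this, it seems prudent to verify each recovered database afterwards; a small sketch (a suggestion, not from the question; the database name is a placeholder, and DBCC CHECKDB is available in SQL Server 2000):

        -- After restarting the service, confirm each database came back consistent.
        USE master;
        DBCC CHECKDB ('YourUserDatabase') WITH NO_INFOMSGS;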

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Server Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and contain a lot of information. Is there a zoom feature to open a window and show me the contents with vertical and horizontal scrollbars? I remember there was one; I thought it was F2, but I must have been mistaken, as it doesn't work. Now I have to scroll horizontally across the field and it's really difficult to see everything. Also, some of the fields contain newline codes etc., so it would be great if the zoom feature would display the info using those newline codes. Does anybody know how to do this?
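    One workaround that may fit here (an assumption on my part, and it only behaves for text that converts cleanly): cast the wide column to xml in the query. In the grid results an xml value is rendered as a clickable link that opens in its own scrollable document window, with line breaks preserved:

        -- Table and column names are placeholders. Text containing characters
        -- that are illegal in XML (e.g. a stray '<' or '&') may fail to convert.
        SELECT CAST(YourWideColumn AS xml) AS YourWideColumn
        FROM dbo.YourTable;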

  • Regedit as Current User

    - by user1013264
    I'm trying to apply a registry fix for an Outlook/O365 issue on a user's account. The issue is that regedit is blocked by a domain GPO. I'm able to run regedit using the local admin account. Question: when I run regedit as the local admin, am I modifying the registry for the local admin user or for the domain user who's actually logged onto the workstation? I'm trying to apply the following fix: http://support.microsoft.com/kb/2843677. Also, the path for the above-mentioned registry key should end in "\Preferences", which is what I'm unable to locate; I'm able to navigate up until \Outlook. Any suggestions would be appreciated. Thank you. Running Outlook 2010.

  • Issue with user having a gmail account and Google Apps account using same email/username

    - by Joshua
    Greetings! We have a user in our organization who had been using her email address at our domain as her username for a personal gmail.com account. We recently moved our folks onto Google Apps, and have just moved our email server over to Google (IMAP/SMTP). I'm having all kinds of trouble, only with this user's account, with her sending and receiving email via the new Google mail server, and I'm wondering if it's because of her existing Gmail account. Her email address with us is [email protected], which is her login/user ID with us on our Google Apps site. She still has her Gmail identity tied to that same [email protected]. She's OK with deleting her old Gmail account... if I do so, however, will it goof things up for her going forward with us on the Google Apps site? Will she not be able to receive email? Thanks! Joshua

  • Independent SharePoint Trainer in DC ~ I conduct teacher-led SharePoint user training anywhere ~

    - by technical-trainer-pro
    Your options: interactive, hands-on VIRTUAL or CLASSROOM style training for all SharePoint users & site admin owners. I also develop customized classes tailored to the specific design of any SharePoint site, acting as the translator for those left to understand and use it on an everyday basis. Audience: users, clients, stakeholders, trainers. Areas: functionality, operations, management, user site customization, ITIL training, governance process, change management, and industry- or client-specific scenarios. INDIVIDUAL RATE: $300 to join any class (1). GROUP RATE: $1500 for a private group of (6-10). Flexible scheduling. Contact me: [email protected]. Local to DC/MD/VA; can train hands-on anywhere.

  • Moving Windows 7 profile to new user

    - by Kevin Grossnicklaus
    I have a laptop which I've been using as part of a corporate network with an AD login (and associated local profile). The laptop is loaded with Windows 7 Ultimate. I need to remove the laptop from this domain and, to start this process, I have already configured a local user on the box for me to use moving forward (granting this user the same local admin rights as the AD user). I'd like to migrate all the files, settings, etc. from the local AD profile to the new non-AD profile. Is there a simple way to do this? Anything built into Win 7? As far as basic files go, I can probably just manually copy all the documents, pictures, music, desktop, favorites, etc., but is there a more streamlined way to move the profile information? -Kevin
