Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.


  • Bind data of interface properties only

    - by nivpenso
    I am new to Entity Framework models and data binding. I created an interface and generated a model class from my Candidate table in the database:

        public interface ICandidate
        {
            string ID { get; set; }
            string Name { get; set; }
            string Mail { get; set; }
        }

    I created a partial class for the generated Candidate model so that I can implement the ICandidate interface without changing any generated code:

        public partial class Candidates : ICandidate
        {
            string ICandidate.ID
            {
                get { return this.PK; }
                set { _PK = value; }
            }

            string ICandidate.Name
            {
                get { return this._Name; }
                set { _Name = value; }
            }

            string ICandidate.Mail
            {
                get { return this._Email; }
                set { this._Email = value; }
            }
        }

    Of course, the generated class has more properties than the interface does (such as an IsDeleted field that the interface doesn't need). I want to display all the candidates from the database in a DataGridView, but only the interface's properties should appear as columns. Is there a way to bind only the interface's properties to the DataGridView?

    In my database there is also a table called Candidate_To_Company with these columns: PK, Candidate_FK, Company_FK, Insertion_Date. I would like to bind this table to a DataGridView too, but instead of displaying Candidate_FK I would like to display the candidate name from ICandidate. Is this possible?
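
    One way to get this behavior (a sketch, not from the original post) is to hand the grid a list whose item type is the interface itself; DataGridView generates columns from the bound list's item type, so a BindingList<ICandidate> exposes only ID, Name and Mail. The context variable and grid name here are assumptions:

        // assumes 'context' is the generated ObjectContext and dataGridView1 is the grid
        List<ICandidate> candidates = context.Candidates
            .ToList()
            .Cast<ICandidate>()
            .ToList();
        dataGridView1.DataSource = new BindingList<ICandidate>(candidates);

    For the Candidate_To_Company grid, the same idea applies: project each row (joined to its Candidate) into a small view type that exposes the candidate's name instead of Candidate_FK, and bind the grid to that projection.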

    Read the article

  • SSRS 2005 giving me "Invalid URI: The format of the URI could not be determined" when trying to customize it

    - by Brian
    Hello, I'm getting the error "Invalid URI: The format of the URI could not be determined" when customizing SSRS 2005. I've made several changes to the configuration files and UI, but I keep getting this error. It isn't logged in the event log or the log files either, which makes it very annoying to debug. So how do I figure out where the error is coming from? Is it the URL that points to the ReportServer2005.asmx file, or something else?

    Updated: The specific error being logged is:

        aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpOnExceptions to default value of 'Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException,Microsoft.ReportingServices.Modeling.InternalModelingException' because it was not specified in Configuration file.
        aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpExcludeIfContainsExceptions to default value of 'System.Data.SqlClient.SqlException,System.Threading.ThreadAbortException' because it was not specified in Configuration file.
        aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing SecureConnectionLevel to default value of '1' because it was not specified in Configuration file.
        aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing DisplayErrorLink to 'True' as specified in Configuration file.
        aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WebServiceUseFileShareStorage to default value of 'False' because it was not specified in Configuration file.
        aspnet_wp!ui!9!3/11/2010-15:52:52:: e ERROR: Invalid URI: The format of the URI could not be determined.
        aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: HTTP status code -- 500
        -------Details--------
        System.UriFormatException: Invalid URI: The format of the URI could not be determined.
           at Microsoft.SqlServer.ReportingServices2005.RSConnection.GetSecureMethods()
           at Microsoft.ReportingServices.UI.Global.RSWebServiceWrapper.GetSecureMethods()
           at Microsoft.SqlServer.ReportingServices2005.RSConnection.IsSecureMethod(String methodname)
           at Microsoft.SqlServer.ReportingServices2005.RSConnection.ValidateConnection()
           at Microsoft.ReportingServices.UI.Global.SecureAllAPI()
           at Microsoft.ReportingServices.UI.ReportingPage.EnsureHttpsLevel(HttpsLevel level)
           at Microsoft.ReportingServices.UI.ReportingPage.ReportingPage_Init(Object sender, EventArgs args)
           at System.EventHandler.Invoke(Object sender, EventArgs e)
           at System.Web.UI.Control.OnInit(EventArgs e)
           at System.Web.UI.Page.OnInit(EventArgs e)
           at System.Web.UI.Control.InitRecursive(Control namingContainer)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: Exception in ShowErrorPage: System.Threading.ThreadAbortException: Thread was being aborted.
           at System.Threading.Thread.AbortInternal()
           at System.Threading.Thread.Abort(Object stateInfo)
           at System.Web.HttpResponse.End()
           at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm)
           at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg)

    Thanks.
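
    Given that the stack trace dies inside RSConnection.GetSecureMethods(), the URI it is choking on is most likely the one Report Manager uses to reach the Report Server web service. In SSRS 2005 that comes from the <UI> section of RSWebApplication.config, which should contain either the virtual directory or a well-formed absolute URL, not both, and not a bare host name. A sketch of the relevant section (the server name is an assumption):

        <UI>
            <ReportServerVirtualDirectory>ReportServer</ReportServerVirtualDirectory>
            <!-- or, instead of the virtual directory: -->
            <!-- <ReportServerUrl>http://myserver/ReportServer</ReportServerUrl> -->
        </UI>

    This is worth checking first if you have been editing the configuration files by hand.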

    Read the article

  • Issue with dynamic array Queue data structure with void pointer

    - by Nazgulled
    Hi, maybe there's no way to solve this the way I'd like, but I don't know everything, so I'd better ask...

    I've implemented a simple queue with a dynamic array so the user can initialize it with whatever number of items they want. I'm also trying to use a void pointer so as to allow any data type, but that's the problem. Here's my code:

        typedef void * QueueValue;

        typedef struct sQueueItem {
            QueueValue value;
        } QueueItem;

        typedef struct sQueue {
            QueueItem *items;
            int first;
            int last;
            int size;
            int count;
        } Queue;

        void queueInitialize(Queue **queue, size_t size) {
            *queue = xmalloc(sizeof(Queue));
            QueueItem *items = xmalloc(sizeof(QueueItem) * size);
            (*queue)->items = items;
            (*queue)->first = 0;
            (*queue)->last = 0;
            (*queue)->size = size;
            (*queue)->count = 0;
        }

        Bool queuePush(Queue * const queue, QueueValue value, size_t val_sz) {
            if(isNull(queue) || isFull(queue)) return FALSE;

            queue->items[queue->last].value = xmalloc(val_sz);
            memcpy(queue->items[queue->last].value, value, val_sz);

            queue->last = (queue->last+1) % queue->size;
            queue->count += 1;
            return TRUE;
        }

        Bool queuePop(Queue * const queue, QueueValue *value) {
            if(isEmpty(queue)) return FALSE;

            *value = queue->items[queue->first].value;
            free(queue->items[queue->first].value);

            queue->first = (queue->first+1) % queue->size;
            queue->count -= 1;
            return TRUE;
        }

    The problem lies in the queuePop function. When I call it, I lose the value because I free it right away. I can't seem to solve this dilemma. I want my library to be generic and modular: the user should not care about allocating and freeing memory, that's the library's job. How can the user still get the value from queuePop and let the library handle all memory allocations and frees?
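
    One minimal fix (a sketch that keeps the poster's helpers xmalloc/isEmpty and the Bool type) is to make queuePop copy the stored bytes into a caller-supplied buffer before freeing the library's copy, mirroring the val_sz parameter that queuePush already takes:

        /* 'value' now points at caller-owned storage of at least val_sz bytes */
        Bool queuePop(Queue * const queue, QueueValue value, size_t val_sz) {
            if (isEmpty(queue)) return FALSE;

            /* copy out first, then release the queue's private copy */
            memcpy(value, queue->items[queue->first].value, val_sz);
            free(queue->items[queue->first].value);

            queue->first = (queue->first + 1) % queue->size;
            queue->count -= 1;
            return TRUE;
        }

    Storing val_sz in QueueItem at push time would let the library drop the size parameter from queuePop and reject undersized buffers; either way, the user never calls malloc or free directly.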

    Read the article

  • Migration Core Data error code 134130

    - by magichero
    I want to do a migration with two Core Data databases. I have read the Apple developer documentation. For the first database, I added some attributes (string, integer and date properties) to the new version of the model and, following all the steps, completed the migration successfully. For the second database, I also added attributes (string, integer, date, transformable and binary-data properties) to the new model version and followed the same steps (like the first database), but the system returns error 134130. Here is the code:

        if (persistentStoreCoordinator_) {
            PMReleaseSafely(persistentStoreCoordinator_);
        }

        // Notify
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:GCalWillMigrationNotification object:self];

        NSString *sourceStoreType = NSSQLiteStoreType;
        NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName];
        NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath];
        BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath];

        NSError *error = nil;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];

        persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc]
            initWithManagedObjectModel:self.managedObjectModel];
        [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType
                                                  configuration:nil
                                                            URL:storeURL
                                                        options:options
                                                          error:&error];
        if (error != nil) {
            abort();
        }

    error is not nil, and below is the log:

        Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn't be completed. (Cocoa error 134130.)"
        UserInfo=0x856f790 {URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite,
        metadata={
            NSStoreModelVersionIdentifiers = ("");
            NSPersistenceFrameworkVersion = 386;
            NSStoreModelVersionHashes = {
                SyncEvent = <fdae355f 55c13fbd 0344415f ea26c8bb ... 4c1721aa dd4122aa>;
                ImportEvent = <7676888f 0d7eaff4 d1f84434 3028ce02 ... 040af6cb e8c5fd01>;
            };
            NSStoreUUID = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440";
            NSStoreType = SQLite;
            _NSAutoVacuumLevel = 2;
            NSStoreModelVersionHashesVersion = 3;
        },
        reason=Can't find model for source store}

    I have tried a lot of solutions, but nothing works. I just added more attributes to the two new model versions, and only the first migration succeeds. Thanks for reading and for your help.
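
    The key part of that log is "reason=Can't find model for source store": automatic lightweight migration only works if the model version that last saved the store is still present in the app bundle with identical entity version hashes. A quick diagnostic (real Core Data API, reusing the variables from the post) is to test the store's metadata against the current model before adding the store:

        NSDictionary *meta = [NSPersistentStoreCoordinator
            metadataForPersistentStoreOfType:NSSQLiteStoreType
                                         URL:storeURL
                                       error:&error];
        BOOL compatible = [self.managedObjectModel isConfiguration:nil
                                       compatibleWithStoreMetadata:meta];

    If no model version in the .xcdatamodeld is compatible (for example because an old version was edited after the store was written, which changes its hashes), Core Data cannot infer a mapping; restoring that exact source version is the usual fix during development.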

    Read the article

  • Function inserted not all records

    - by user1799459
    I wrote the following code for transferring data from Access to Firebird:

        def FirebirdDatetime(dt):
            return '\'%s.%s.%s\'' % (str(dt.day).rjust(2, '0'),
                                     str(dt.month).rjust(2, '0'),
                                     str(dt.year).rjust(4, '0'))

        def SelectFromAccessTable(tablename):
            return 'select * from [' + tablename + ']'

        def InsertToFirebirdTable(tablename, row):
            values = ''
            i = 0
            for elem in row:
                i += 1
                #print type(elem)
                if type(elem) == int:
                    temp = str(elem)
                elif (type(elem) == str) or (type(elem) == unicode):
                    temp = '\'%s\'' % (elem,)
                elif type(elem) == datetime.datetime:
                    temp = FirebirdDatetime(elem)
                elif type(elem) == decimal.Decimal:
                    temp = str(elem)
                elif elem == None:
                    temp = 'null'
                if (i < len(row)):
                    values += temp + ', '
                else:
                    values += temp
            return 'insert into ' + tablename + ' values (' + values + ')'

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            SelectSql = SelectFromAccessTable(accesstablename)
            for row in accesscursor.execute(SelectSql):
                InsertSql = InsertToFirebirdTable(firebirdtablename, row)
                print InsertSql
                firebirdcursor.execute(InsertSql)

    In the main module there is an AccessToFirebird call for each table:

        conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb')
        SqlAccess = conAcc.cursor()

        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur)
        conn.commit()

        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur)
        sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur)
        sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur)
        sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur)
        sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur)
        sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur)
        sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur)
        conn.commit()
        conAcc.commit()
        conn.close()
        conAcc.close()

    But as a result, not all records were inserted into the PRODUCTS table (the Goods table of the Northwind database). For example, this request does not work:

        insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '')

    In IBExpert, this request produces the message:

        can't format message 13:587 -- message file C:\Windows\firebird.msg not found.
        conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????"

    Only requests like these worked:

        insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '')
        insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '')
        insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? ???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '')

    The other records were not inserted.
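
    The failing rows are exactly the ones carrying non-ASCII text, and interpolating those values into SQL strings forces a charset conversion that Firebird rejects ("conversion error from string ..."). A sketch of the same transfer using parameterized queries instead, which sidesteps both the quoting code and most of the conversion trouble (it assumes the Firebird driver follows the DB-API 'qmark' paramstyle, as kinterbasdb/fdb do, and that the connection was opened with an explicit charset such as UTF8):

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            for row in accesscursor.execute('select * from [%s]' % accesstablename):
                placeholders = ', '.join(['?'] * len(row))
                insert_sql = 'insert into %s values (%s)' % (firebirdtablename, placeholders)
                # the driver converts ints, Decimals, datetimes and unicode itself
                firebirdcursor.execute(insert_sql, tuple(row))

    With parameters, FirebirdDatetime and InsertToFirebirdTable become unnecessary, and the length limits on the VARCHAR columns become the remaining thing to verify for any rows that still fail.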

    Read the article

  • MS Access 2003 - Unbound Form uses INSERT statement to save to table; what about subforms?

    - by Justin
    So I have an unbound form that I use to save data to a table on a button click. Is there a way I can have subforms for entry that will allow me to save data to the table within that same button click? Basically I want to add more entry options for the user, and while I know other ways to do it, I am particularly curious about doing it this way (if it can be done).

    Let's say the parent form is frmMain, and there are two child forms, "sub1" and "sub2". Just for example's sake, say frmMain has two text boxes: txtTitle & txtAuthor. sub1 and sub2 each have a text box on them that represents something like a price. The idea is title & author of a book, and then a price at each store (simplified). So I tried this (because I thought it was worth a shot):

        Dim db As DAO.Database
        Dim sql As String

        sql = "INSERT INTO (Title, Author, PriceA, PriceB) VALUES ("

        If Not IsNull(Me.txtTitle) Then
            sql = sql & """" & Me.txtTitle & ""","
        Else
            sql = sql & " NULL,"
        End If

        If Not IsNull(Me.txtAuthor) Then
            sql = sql & " """ & Me.txtAuthor & ""","
        Else
            sql = sql & " NULL,"
        End If

        If Not IsNull(Forms!sub1.txtPrice) Then
            sql = sql & " """ & Forms!sub1.txtPrice & ""","
        Else
            sql = sql & " NULL,"
        End If

    Without finishing the code, I think you may see the GOTCHA I am headed for. I tried this and got an "Access cannot find the form """. I think I can pretty much see why on this approach too, because when I click the button that calls the new subform into the parent form, the values that were just entered are not held/saved as sub1 closes and sub2 opens.

    I should mention that the idea above is not intended to be a one-or-the-other approach; rather, both subforms are used every time. This is just an example: I want to use this method (if possible) to have about 7 different subform choices in one form, and be able to save to a table via a SQL statement. I realize that there may be better ways, but I am just wondering if I can get there with this approach, out of curiosity. Thanks as always!
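
    Two details in the attempt are worth noting (observations on the posted code, with an assumed fix). First, the INSERT is missing its table name (INSERT INTO tblBooks (Title, ...), with tblBooks being a hypothetical name). Second, Forms!sub1 only works for forms opened in their own window; a subform embedded on frmMain is reached through its subform control's Form property, and its controls keep their values for as long as that subform control stays loaded:

        ' sub1 here is the name of the subform CONTROL on frmMain
        If Not IsNull(Me!sub1.Form!txtPrice) Then
            sql = sql & " """ & Me!sub1.Form!txtPrice & ""","
        Else
            sql = sql & " NULL,"
        End If

    If the button swaps one subform for another before the INSERT runs, the swapped-out values must be stashed first (for example in module-level variables), since unloading a subform discards them.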

    Read the article

  • Performing dynamic sorts on EF4 data

    - by Jaxidian
    I'm attempting to perform dynamic sorting of data that I'm putting into grids in our MVC UI. Since MVC is abstracted from everything else via WCF, I've created a couple of utility classes and extensions to help with this. The two most important things (slightly simplified) are as follows:

        public static IQueryable<T> ApplySortOptions<T, TModel, TProperty>(
            this IQueryable<T> collection,
            IEnumerable<ISortOption<TModel, TProperty>> sortOptions) where TModel : class
        {
            var sortedSortOptions = (from o in sortOptions
                                     orderby o.Priority ascending
                                     select o).ToList();

            var results = collection;
            foreach (var option in sortedSortOptions)
            {
                var currentOption = option;
                var propertyName = currentOption.Property.MemberWithoutInstance();
                var isAscending = currentOption.IsAscending;

                if (isAscending)
                {
                    results = from r in results
                              orderby propertyName ascending
                              select r;
                }
                else
                {
                    results = from r in results
                              orderby propertyName descending
                              select r;
                }
            }
            return results;
        }

        public interface ISortOption<TModel, TProperty> where TModel : class
        {
            Expression<Func<TModel, TProperty>> Property { get; set; }
            bool IsAscending { get; set; }
            int Priority { get; set; }
        }

    I've not given you the implementation for MemberWithoutInstance(), but just trust me in that it returns the name of the property as a string. :-)

    Following is an example of how I would consume this (using a non-interesting, basic implementation of ISortOption<TModel, TProperty>):

        var query = from b in CurrentContext.Businesses
                    select b;

        var sortOptions = new List<ISortOption<Business, object>>
        {
            new SortOption<Business, object>
            {
                Property = (x => x.Name),
                IsAscending = true,
                Priority = 0
            }
        };

        var results = query.ApplySortOptions(sortOptions);

    As I discovered with this question, the problem is specific to my "orderby propertyName ascending" and "orderby propertyName descending" lines (everything else works great as far as I can tell). How can I do this in a dynamic/generic way that works properly?
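
    The reason those lines fail is that "orderby propertyName" sorts by the constant string itself rather than by the property it names, so every row compares equal. A sketch of one standard replacement: build the key selector at runtime and call Queryable.OrderBy/OrderByDescending through an expression tree (the helper name is illustrative):

        private static IQueryable<T> OrderByProperty<T>(
            IQueryable<T> source, string propertyName, bool ascending)
        {
            var parameter = Expression.Parameter(typeof(T), "x");
            var property = Expression.Property(parameter, propertyName);
            var keySelector = Expression.Lambda(property, parameter);

            // dispatch to Queryable.OrderBy<T, TKey> / OrderByDescending<T, TKey>
            var call = Expression.Call(
                typeof(Queryable),
                ascending ? "OrderBy" : "OrderByDescending",
                new[] { typeof(T), property.Type },
                source.Expression,
                Expression.Quote(keySelector));

            return source.Provider.CreateQuery<T>(call);
        }

    Inside the foreach, the two query expressions become OrderByProperty(results, propertyName, isAscending); for the second and later sort options, "ThenBy"/"ThenByDescending" should be used instead, so earlier orderings are not discarded.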

    Read the article

  • Linq-to-sql Compiled Query returning object NOT belonging to submitted DataContext

    - by Vladimir Kojic
    Compiled query:

        public static class Machines
        {
            public static readonly Func<OperationalDataContext, short, Machine>
                QueryMachineById = CompiledQuery.Compile(
                    (OperationalDataContext db, short machineID) =>
                        db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

            public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
            {
                Machine machine;

                // Old code (working)
                //var machineRepository = unitOfWork.GetRepository<Machine>();
                //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

                // New code (making problems)
                machine = QueryMachineById(unitOfWork.DataContext, id);

                return machine;
            }
        }

    It looks like the compiled query is caching the Machine object and returning the same object even when the query is called from a new DataContext (I'm disposing the DataContext in the service, but I'm getting the Machine from the previous DataContext). I use POCOs and XML mapping.

    Revised: It looks like the compiled query is returning the result from a new data context and is not using the one that I passed into the compiled query. Therefore I cannot reuse the returned object and link it to another object obtained from the data context through non-compiled queries.

        [TestMethod]
        public void GetMachinesTest()
        {
            // Test preparation (not important)
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();

                // GET ALL
                List<Machine> list = machineRepository.FindAll().ToList<Machine>();
                VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET",
                    "MachineIconC.xaml", false, true, LicenseType.Licensed,
                    "10.0.97.3", "10.0.97.3", 0);

                var machine = Machines.GetMachineById(unitOfWork, 3);
                Assert.AreSame(list[2], machine); // PASS !!!!
            }

            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();

                // GET ALL
                List<Machine> list = machineRepository.FindAll().ToList<Machine>();
                VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET",
                    "MachineIconC.xaml", false, true, LicenseType.Licensed,
                    "10.0.97.3", "10.0.97.3", 0);

                var machine = Machines.GetMachineById(unitOfWork, 3);
                Assert.AreSame(list[2], machine); // FAIL !!!!
            }
        }

    If I run other (complex) unit tests I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext."

    Read the article

  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data.

    One of the entities is an abstract entity, Page. Page is the parent of several types of pages: PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage, etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body. BasicPage has one: body. And so on.) Additionally, all Page subclasses define layout-specific implementations of certain methods. The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.)

    I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out: I want the Pages and Styles listed in the source list, and when you select one of them, all of the relevant fields for that object should appear at the right (mostly NSTextViews, pop-up menus, etc.). I laid that out and did all of the bindings in Interface Builder.

    The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected? For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field, body, of BasicPage. But if I select a PageWithHeaderAndFooter, I want to display the general page stuff plus three NSTextViews (one each for header, body, and footer). If I have a Style selected, I want to display various pop-up menus, color wells, etc. For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity.

    How would you do this? Thank you for your help!
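
    One common pattern for this (a sketch, not from the post; the names are illustrative): keep an empty container such as an NSBox in the detail area, give each page/style type its own nib and NSViewController, and swap the box's content view whenever the source-list selection changes:

        // on selection change: pick a view controller for the selected object's type
        NSViewController *detail = [self detailControllerForObject:selectedObject]; // assumed helper
        [detail setRepresentedObject:selectedObject];  // bindings in each nib target representedObject
        [detailBox setContentView:[detail view]];

    The shared Page fields (title, author) can live in an always-visible part of the window, so only the per-subclass block editors are swapped in and out.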

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data Utility tutorial. It was all going nicely, until...

    The tutorial uses a custom subclass of NSManagedObject called Run. Run.h looks like this:

        #import <Foundation/Foundation.h>
        #import <CoreData/CoreData.h>

        @interface Run : NSManagedObject
        {
            NSInteger processID;
        }

        @property (retain) NSDate *date;
        @property (retain) NSDate *primitiveDate;
        @property NSInteger processID;

        @end

    Now, in Run.m we have an accessor method for the processID variable:

        - (void)setProcessID:(int)newProcessID
        {
            [self willChangeValueForKey:@"processID"];
            processID = newProcessID;
            [self didChangeValueForKey:@"processID"];
        }

    In main.m, we use functions to set up a managed object model and context, instantiate an entity called Run, and add it to the context. We then get the current NSProcessInfo, in preparation for setting the processID of the run object:

        NSManagedObjectContext *moc = managedObjectContext();
        NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"];
        Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc];
        NSProcessInfo *processInfo = [NSProcessInfo processInfo];

    Next, we try to call the accessor method defined in Run.m to set the value of processID:

        [run setProcessID:[processInfo processIdentifier]];

    And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received!

    === MORE INFORMATION ===

    Following up on Jeremy's suggestions: the processID attribute in the model is set up like this:

        NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc] init];
        [idAttribute setName:@"processID"];
        [idAttribute setAttributeType:NSInteger32AttributeType];
        [idAttribute setOptional:NO];
        [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]];

    which seems a little odd; we are defining it as a scalar type and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere.

    By the way, the getter method for processID is defined like this:

        - (int)processID
        {
            [self willAccessValueForKey:@"processID"];
            NSInteger pid = processID;
            [self didAccessValueForKey:@"processID"];
            return pid;
        }

    and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!
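
    A quick way to narrow this down (generic Objective-C debugging, not part of the tutorial) is to trap the exception at the call site and inspect the pieces involved, since "crash on this line" can mean an exception raised well inside Core Data:

        NSLog(@"moc: %@, runEntity: %@, run: %@", moc, runEntity, run);
        @try {
            [run setProcessID:[processInfo processIdentifier]];
        }
        @catch (NSException *e) {
            NSLog(@"setProcessID: raised %@: %@", [e name], [e reason]);
        }

    If the log shows an unrecognized-selector or invalid-argument exception, the mismatch between the int/NSInteger scalar accessors and the NSNumber-typed model attribute noted above is the first place to look.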

    Read the article

  • Solr Facet Search spring-data-solr

    - by sv1
    I am new to Solr and we are using Spring Data for Solr. I have a question - maybe it's too simple, but I am unable to figure it out. Basically I need to search on any field, but I have a "zip" field as one of the facet fields.

    I have a Solr repository (I am not sure if the annotations on the repository are correct):

        public interface MyRepository extends SolrCrudRepository<MyPOJO, String> {
            @Query(value = "*:*")
            @Facet(fields = {"zip"})
            public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm, Pageable page);
        }

    In my service class I am trying to call this method by injecting MyRepository, like below:

        public class MyService {
            @Inject
            MyRepository solrRepo;

            @Inject
            SolrTemplate solrTemplate;

            public FacetPage<MyPOJO> tryFacets(String searchString) {
                // Here is where I am struggling
                SimpleFacetQuery query = new SimpleFacetQuery(new SimpleStringCriteria(searchString));
                query.setFacetOptions(new FacetOptions("zip"));

                // Not sure how to get the Pageable object,
                // but the repository doesn't accept the call without it.
                return solrTemplate.queryForPage(query, {Pageable instance to be passed here});
            }
        }

    From my jUnit test I am loading the context files needed for Solr and Spring. In jUnit the test method looks like this:

        service.tryFacets("some value");

    and it fails with "method not found" in the service class at the call to the repository method.

    ********* EDIT ****************

    As per ChristophStrobl's advice I created a copyField called multivaluedCopyField, and the searchString argument now works fine, but the facets still aren't working: I get the MyPOJO objects as the response, but the facet count and the faceted values are missing. Now my code looks like this:

        public interface MyRepository extends SolrCrudRepository<MyPOJO, String> {
            @Query(value = "*:*")
            @Facet(fields = {"zip"})
            public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm, Pageable page);
        }

    My service class looks like:

        public class MyService {
            @Inject
            MyRepository solrRepo;

            public FacetPage<MyPOJO> tryFacets(String searchString) {
                SimpleFacetQuery query = new SimpleFacetQuery(new SimpleStringCriteria(searchString));
                query.setPageRequest(new PageRequest(0, 5));
                query.setFacetOptions(new FacetOptions("zip"));

                FacetPage<MyPOJO> facetedPOJO =
                    solrRepo.findByQueryandAnno(searchString, query.getPageRequest());
                return facetedPOJO;
            }
        }

    My jUnit method call is:

        service.tryFacets("some value");
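
    Two things stand out in the posted repository (observations plus an assumed fix). The @Query annotation is hard-coded to "*:*", so the searchTerm argument never reaches Solr through that path; Spring Data Solr supports positional placeholders for exactly this. And when the facet request does go through, the counts come back on the FacetPage itself rather than on the content items, so they have to be read explicitly:

        public interface MyRepository extends SolrCrudRepository<MyPOJO, String> {
            @Query(value = "?0")                  // pass the search string through
            @Facet(fields = {"zip"}, limit = 10)
            FacetPage<MyPOJO> findByQueryandAnno(String searchTerm, Pageable page);
        }

        // reading the facet results:
        FacetPage<MyPOJO> page = solrRepo.findByQueryandAnno("some value", new PageRequest(0, 5));
        for (Page<FacetFieldEntry> facet : page.getFacetResultPages()) {
            for (FacetFieldEntry entry : facet) {
                System.out.println(entry.getValue() + " -> " + entry.getValueCount());
            }
        }

    getFacetResultPages() and FacetFieldEntry are part of the Spring Data Solr API; the limit value here is just an example.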

    Read the article

  • Change Label Control Property Based on Data from SqlDataSource Inside a Repeater

    - by Furqan Muhammad Khan
    I am using a Repeater control to populate data from a SqlDataSource into my custom-designed display box:

        <asp:Repeater ID="Repeater1" runat="server" DataSourceID="SqlDataSource1"
            OnDataBinding="Repeater_ItemDataBound">
            <HeaderTemplate>
            </HeaderTemplate>
            <ItemTemplate>
                <div class="bubble-content">
                    <div style="float: left;">
                        <h2 class="bubble-content-title"><%# Eval("CommentTitle") %></h2>
                    </div>
                    <div style="text-align: right;">
                        <asp:Label ID="lbl_category" runat="server" Text=""><%# Eval("CommentType") %></asp:Label>
                    </div>
                    <div style="float: left;">
                        <p><%# Eval("CommentContent") %></p>
                    </div>
                </div>
            </ItemTemplate>
            <FooterTemplate>
            </FooterTemplate>
        </asp:Repeater>

        <asp:SqlDataSource ID="mySqlDataSource" runat="server"
            ConnectionString="<%$ ConnectionStrings:myConnectionString %>"
            SelectCommand="SELECT [CommentTitle],[CommentType],[CommentContent] FROM [Comments] WHERE ([PostId] = @PostId)">
            <SelectParameters>
                <asp:QueryStringParameter Name="PostId" QueryStringField="id" Type="String" />
            </SelectParameters>
        </asp:SqlDataSource>

    Now, there can be three values of "CommentType" in the database. I want to change the CssClass property of lbl_category based on the value of [CommentType]. I tried doing this:

        <asp:Label ID="lbl_category" runat="server" CssClass="<%# Eval("CommentType") %>" Text=""><%# Eval("CommentType") %></asp:Label>

    But this gives the error "The server control is not well formed", and I haven't been able to find a way to achieve this in the code-behind. Can someone please help?
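
    The "not well formed" error comes from the nested double quotes: the CssClass attribute is delimited with the same quote character that the Eval expression uses internally. Delimiting the attribute with single quotes is the usual fix:

        <asp:Label ID="lbl_category" runat="server"
            CssClass='<%# Eval("CommentType") %>'
            Text='<%# Eval("CommentType") %>' />

    This assumes the three CommentType values are usable as CSS class names; if they aren't, mapping them in an ItemDataBound handler works too (and note that the repeater as posted wires Repeater_ItemDataBound to OnDataBinding, where OnItemDataBound is likely what was intended).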

    Read the article

  • How to build this encryption system that allows multiple users/objects.

    - by Patrick
    Hello! I am trying to figure out how to create an optimal solution for my project. I made a simple picture in Photoshop to try to illustrate the problem and how I want it to work (if possible), and I'll explain it based on that picture.

    First off, we have a couple of objects on the left. These objects all get encrypted with their own encryption key (EKey in the picture) and are then stored in the database. On the other side we have different users placed into roles (one user can be in a lot of roles), and the roles are associated with different objects. So one person only has access to the objects that their roles provide. For instance, Role A might have access to Objects A and B, Role B has access only to Object C, and Role C has access to all objects. Nothing strange in that, right? Different roles have access to different objects.

    Now to the problem. Each user has to log in with his/her username/password and then gets access to the objects that his/her roles provide. All the objects are encrypted, so the user needs to get a decryption key somehow. I don't want to store the encryption key as a text string on the server. It should, if possible, be decrypted using the user's password (along with the role) or something similar. That way you have to be a user on the server in order to decrypt an object and work with it.

    I was thinking about a public/private key encryption system, but I am kind of stuck on how to give the different users the decryption keys to the objects, since I need to be able to move users to and from roles, add new users, add new roles, and create/delete objects. There will be one administrator who then adds some data to allow the users in a role to get the decryption key to decrypt the object. Nothing is static, and I am trying to get a picture of how this can be built, or whether there is a far better solution. The only criteria are:

    - Encrypted objects.
    - Decryption key should not be stored as text.
    - Different users have access to different objects.
    - Does NOT have to have roles.
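
    What this describes is essentially envelope encryption (key wrapping): each object keeps its own data key; that key is stored wrapped once per role; and each role key is stored wrapped once per user, under a key derived from that user's password. A compressed C# sketch of the unwrap chain at login (all names are illustrative, and AesUnwrap stands in for any AES decrypt helper built on the standard Aes/Rfc2898DeriveBytes types):

        // 1. derive the user's key-encryption key from the password (PBKDF2)
        byte[] kek = new Rfc2898DeriveBytes(password, storedSalt, 10000).GetBytes(32);

        // 2. unwrap the role key (one wrapped copy stored per user-role pair)
        byte[] roleKey = AesUnwrap(wrappedRoleKeyForUser, kek);

        // 3. unwrap the object's data key (one wrapped copy stored per role-object pair)
        byte[] dataKey = AesUnwrap(wrappedDataKeyForRole, roleKey);

        // 4. decrypt the object itself
        byte[] plaintext = AesUnwrap(encryptedObject, dataKey);

    Adding a user to a role is then just the administrator wrapping that role's key under the new user's KEK; removing a user means deleting their wrapped copy (and, strictly, rotating the role key). No decryption key is ever stored in the clear, which matches the stated criteria.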

    Read the article

  • Principles of Big Data By Jules J Berman, O&rsquo;Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx

    A fantastic book! It must become part, if it is not yet, of the fundamentals of Big Data as a field of science, and I highly recommend it to those who are in Big Data practice. I confess this book is one of my best reads this year, and for a number of reasons.

    The book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, yes, sometimes die. But not only that: most importantly, the book is filled with valuable advice and an accurate, even overwhelming (in a positive sense) amount of reference material. The author does not even stop there: there are numerous technical excerpts, links and examples that let you quickly accomplish many daunting tasks, or make you aware of what you need to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for referring to everyone who faces Big Data).

    Be aware that Jules Berman's background is in medicine, so naturally this book discusses that subject a lot, as I believe it is very dear to the author's heart. This does not make the book any less significant, however; quite the opposite. I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history!

    On a personal note, as a database and BI professional, it has helped me to understand better the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered. I must say this made me very curious, and I will be trying to stretch my wallet to acquire several books that go deeper into the most popular of them.

    My favorite parts of the book? Well, all of them actually, but especially chapter 9, "Analysis" - it is just very close to my heart, since it let me see what I do with data from a different angle - and then the next, "Special Considerations". The writing is accessible at all levels; I had no technical problem reading it in ebook format on my 8" tablet or on a large-screen monitor.

    If I were asked to say at least something negative, I would have to state that the book's first part reads like academic material, with the book relaxing the reader as it progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools - bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to have become a de facto knowledge standard for any modern human being.

    So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; by the way, this book also helps you build better future Big Data projects.

    Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007):

    There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, there are up-to-date statistics associated with the index, so this is as expected.

    The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct: 45 rows from the seek joining to one row each time gives 45 rows as output.

    The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan:

    The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization: it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates.

    The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup, which is clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).

    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected.

    We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup.

    The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world), but it easily can be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information, which can only be bad news.

    If you think this situation should be improved, please vote for this Connect item.

    © 2012 Paul White - All Rights Reserved
    twitter: @SQL_Kiwi
    email: [email protected]

    Read the article

  • NHibernate mapping error SQL Server 2008 Express

    - by developer
    Hi all, I tried an example from the NHibernate in Action book, and when I run the app it throws an exception saying "Could not compile the mapping document: HelloNHibernate.Employee.hbm.xml". Below is my code.

    Employee.hbm.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
            <class name="HelloNHibernate.Employee, HelloNHibernate" lazy="false" table="Employee">
                <id name="id" access="field">
                    <generator class="native"/>
                </id>
                <property name="name" access="field" column="name"/>
                <many-to-one access="field" name="manager" column="manager" cascade="all"/>
            </class>
        </hibernate-mapping>

    Program.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using NHibernate;
        using System.Reflection;
        using NHibernate.Cfg;

        namespace HelloNHibernate
        {
            class Program
            {
                static void Main(string[] args)
                {
                    CreateEmployeeAndSaveToDatabase();
                    UpdateTobinAndAssignPierreHenriAsManager();
                    LoadEmployeesFromDatabase();

                    Console.WriteLine("Press any key to exit...");
                    Console.ReadKey();
                }

                static void CreateEmployeeAndSaveToDatabase()
                {
                    Employee tobin = new Employee();
                    tobin.name = "Tobin Harris";

                    using (ISession session = OpenSession())
                    {
                        using (ITransaction transaction = session.BeginTransaction())
                        {
                            session.Save(tobin);
                            transaction.Commit();
                        }
                        Console.WriteLine("Saved Tobin to the database");
                    }
                }

                static ISession OpenSession()
                {
                    if (factory == null)
                    {
                        Configuration c = new Configuration();
                        c.AddAssembly(Assembly.GetCallingAssembly());
                        factory = c.BuildSessionFactory();
                    }
                    return factory.OpenSession();
                }

                static void LoadEmployeesFromDatabase()
                {
                    using (ISession session = OpenSession())
                    {
                        IQuery query = session.CreateQuery("from Employee as emp order by emp.name asc");
                        IList<Employee> foundEmployees = query.List<Employee>();

                        Console.WriteLine("\n{0} employees found:", foundEmployees.Count);
                        foreach (Employee employee in foundEmployees)
                            Console.WriteLine(employee.SayHello());
                    }
                }

                static void UpdateTobinAndAssignPierreHenriAsManager()
                {
                    using (ISession session = OpenSession())
                    {
                        using (ITransaction transaction = session.BeginTransaction())
                        {
                            IQuery q = session.CreateQuery("from Employee where name='Tobin Harris'");
                            Employee tobin = q.List<Employee>()[0];
                            tobin.name = "Tobin David Harris";

                            Employee pierreHenri = new Employee();
                            pierreHenri.name = "Pierre Henri Kuate";
                            tobin.manager = pierreHenri;

                            transaction.Commit();
                            Console.WriteLine("Updated Tobin and added Pierre Henri");
                        }
                    }
                }

                static ISessionFactory factory;
            }
        }

    Employee.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace HelloNHibernate
        {
            class Employee
            {
                public int id;
                public string name;
                public Employee manager;

                public string SayHello()
                {
                    return string.Format("'Hello World!', said {0}.", name);
                }
            }
        }

    App.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <configSections>
                <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate"/>
            </configSections>
            <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
                <session-factory>
                    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
                    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
                    <property name="connection.connection_string">
                        Data Source=SQLEXPRESS2008;Integrated Security=True;
                        User ID=SQL2008;Password=;initial catalog=HelloNHibernate
                    </property>
                    <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
                    <property name="show_sql">false</property>
                </session-factory>
            </hibernate-configuration>
        </configuration>

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL);

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets:

    - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
    - SAMPLED will have 21 columns full of data.
    - LIMITED will have 12 columns of data and NULLs in the remainder.
    - DETAILED will have 21 columns full of data.

    So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all, and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , .....
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                        AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats]
            (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;

        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT *
            FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used.

    So, now we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    - page_count: total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and performance starts to decrease.

    This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value for the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that tables will have indexes that suffer rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes.

    We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8KB has actual data written on it), column 3 (record_count) is how many records are recorded in the index, and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*.

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index, purely offered as a guide so the DBA can better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, to avoid new records needing to be inserted in the middle of the index and instead always be added to the end.

    * - it's approximate, as there are many factors (associated with things like the type of data and other database settings) that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • How can I keep track of SQL Server updates?

    - by Adrian Grigore
    Hi, if I am not mistaken, SQL Server cannot be updated automatically via the regular Windows Update routine. Instead, there are cumulative updates that need to be installed by hand. I assume this is done for security and stability reasons. Is this correct? If so, how can I keep track of new updates without regularly reading SQL Server related blogs? Is there any low-volume newsletter I can subscribe to (ideally one announcing only critical updates)?
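
    Whatever notification channel you settle on, you can always tell which build a server is currently running with plain T-SQL (standard SERVERPROPERTY calls) and compare that against Microsoft's published build lists:

        SELECT SERVERPROPERTY('ProductVersion') AS [Build],
               SERVERPROPERTY('ProductLevel')   AS [ServicePackLevel],
               SERVERPROPERTY('Edition')        AS [Edition];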

    Read the article

  • How can I find the space used by a SQL Transaction Log?

    - by Sean Earp
    The SQL Server sp_spaceused stored procedure is useful for finding out a database's size, unallocated space, etc. However (as far as I can tell), it does not report that information for the transaction log, and looking at database properties within SQL Server Management Studio also does not provide that information for transaction logs. While I can easily find the physical space used by a transaction log by looking at the .ldf file, how can I find out how much of the log file is used and how much is unused?
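
    One long-standing answer to exactly this question is DBCC SQLPERF, which reports log usage per database (real T-SQL; the percentage column shows how much of each physical log file is currently in use):

        DBCC SQLPERF(LOGSPACE);

    The output lists each database with its Log Size (MB) and Log Space Used (%), so the unused portion is simply the remainder.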

    Read the article

  • Used SQL Svr 2008 Config Manager to Set Service Account to Local System: What Did It Change?

    - by Frank Ramage
    Direct shot-to-foot moment... While setting up individual non-admin accounts for the MSSQLSERVER services, I temporarily set the Server service login to the Local System account. I remembered later that "SQL Server Configuration Manager performs additional configuration such as setting permissions in the Windows Registry so that the new account can read the SQL Server settings." I want my Local System account back (actually, just restored to its original security profile). Any advice? Thanks!

    Read the article

  • How do I configure SQL Server to allow other users to access a database?

    - by Zian Choy
    Environment:

    - Windows 7 Ultimate
    - SQL Server 2005 Express
    - 2 users on the computer

    I tried making the 2nd user a login in SQL Server (THINKPAD\2ndUser) and adding him as a user to the database ("2ndUser"). Then I logged in as 2ndUser and started Visual Studio 2008. When I tried to connect to the database, I got the following error message:

        The database '<bleep>' does not exist or you do not have permission to see it.
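
    For reference, a minimal T-SQL sketch of the grants this scenario usually needs (the database name is a stand-in, and granting db_datareader is an assumption about what the 2nd user should be allowed to do):

        CREATE LOGIN [THINKPAD\2ndUser] FROM WINDOWS;
        USE MyDatabase;  -- hypothetical database name
        CREATE USER [THINKPAD\2ndUser] FOR LOGIN [THINKPAD\2ndUser];
        EXEC sp_addrolemember 'db_datareader', 'THINKPAD\2ndUser';

    If the error persists after this, it is worth checking that Visual Studio is connecting to the same instance (.\SQLEXPRESS) the login was created on, since a user instance spun up by User Instance=True in the connection string would not see it.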

    Read the article
