Search Results

Search found 47492 results on 1900 pages for 'sql management studio'.

Page 581/1900

  • Are ternary operators not valid for linq-to-sql queries?

    - by KallDrexx
    I am trying to display a nullable DateTime in my JSON response. In my MVC controller I am running the following query:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            id = r.id,
                            name = r.name,
                            start = DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                            end = r.TestRequestRuns.First().end_dt.HasValue
                                ? DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value)
                                : string.Empty
                        });

    When I run requests.ToArray() I get the following exception:

        Could not translate expression 'Table(TestRequest)
            .Where(r => ((r.scheduled_time == null) AndAlso (r.TestRequestRuns.Count > 0)))
            .Select(r => new <>f__AnonymousType18`4(
                id = r.id,
                name = r.name,
                start = value(QAWebTools.Controllers.TestRequestsController).DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                end = IIF(r.TestRequestRuns.First().end_dt.HasValue,
                    value(QAWebTools.Controllers.TestRequestsController).DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value),
                    Invoke(value(System.Func`1[System.String])))))'
        into SQL and could not treat it as a local expression.

    If I comment out the end = line, everything seems to run correctly, so the problem doesn't seem to be the use of my local DateAndTimeDisplayString method. The only thing I can think of is that LINQ to SQL doesn't like ternary operators. I think I've used ternary operators before, but I can't remember whether that was in this code base or another code base (which uses EF4 instead of L2S). Is this true, or am I missing some other issue?
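
    One direction that usually works for this (a minimal sketch assuming the same model as above; the intermediate projection is illustrative, not from the original post) is to let LINQ to SQL return the raw values and switch to LINQ to Objects with AsEnumerable() before calling the local formatting method, so neither the ternary nor DateAndTimeDisplayString has to be translated into SQL:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            r.id,
                            r.name,
                            start_dt = r.TestRequestRuns.First().start_dt,
                            end_dt = r.TestRequestRuns.First().end_dt    // still nullable here
                        })
                        .AsEnumerable()    // everything below runs in memory, not in SQL
                        .Select(r => new
                        {
                            id = r.id,
                            name = r.name,
                            start = DateAndTimeDisplayString(r.start_dt),
                            end = r.end_dt.HasValue ? DateAndTimeDisplayString(r.end_dt.Value) : string.Empty
                        });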

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
    Let’s say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // ... more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework) and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter which instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this a good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s one of the things a lot of people debate about using different components like this, and how you configure them. I kinda hate XML config files and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
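
    For completeness, here is a minimal sketch of the top of that call stack (the controller and the exact IMyRepo signature are illustrative, not from the original post), showing that the consuming code never knows which cache it is getting:

        using System.Web.Mvc;

        public interface IMyRepo
        {
            SomeType Get(int someTypeID);
        }

        public class SomeTypeController : Controller
        {
            private readonly IMyRepo _repo;

            // The DI container supplies IMyRepo, whose implementation received the
            // ICacheProvider (Azure or HttpRuntime) chosen at container initialization.
            public SomeTypeController(IMyRepo repo)
            {
                _repo = repo;
            }

            public ActionResult Details(int id)
            {
                var someType = _repo.Get(id);    // cache first, then the database
                if (someType == null)
                    return HttpNotFound();
                return View(someType);
            }
        }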

    Read the article

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering from the originally stated outline quite a bit, because as I teach about this more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) ), I am starting to find that a different order works better. Originally I had slated myself to talk more about modeling here for three chapters, then get back to the more implementation-oriented topics to finish out the book, but now I am going to keep plugging through...(read more)

    Read the article

  • Using Coalesce

    - by Derek Dieter
    The COALESCE function is used to find the first non-NULL value. It takes a limitless number of parameters and evaluates them in order until it finds one that is not NULL. If all the parameters are NULL, COALESCE will also return a NULL value.

        -- hard-coded example
        SELECT MyValue = COALESCE(NULL, NULL, 'abc', 123)

    The example above returns [...]

    Read the article

  • Learn Linux Command Line for Web Server Management [closed]

    - by Jonathan
    I've searched high and low for a good resource for learning the Linux command line. I've found a handful of separate resources, but none that really help with web server management. I'm currently learning through trial and error with 'man' pages, along with Google. I was just wondering if anyone had a solid resource that they used to learn, and would be willing to share it with me. Thanks so much for your time, I really appreciate it! EDIT: I have a few CentOS servers at the moment, and I know the basics; I'm just trying to get to a more advanced level.

    Read the article

  • SQL Server Integration Services Connection Manager Tips and Tricks

    In this article, we will take a look at the following Tips and Tricks for Connection Managers: Adding an "Application Name" property to the connection string; Creating Two Connection Managers for each Database Connection; and Capturing Connection Manager details in Package Configurations.
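
    As a rough illustration of the first tip (a sketch; the server, database, and package names below are placeholders), an explicit Application Name makes each package's connections easy to identify on the server, for example in the program_name column of server-side session views or a Profiler trace:

        using System;
        using System.Data.SqlClient;

        class ConnectionStringExample
        {
            static void Main()
            {
                // Placeholder server/database; the Application Name is the point of the tip --
                // it surfaces as the program name in server-side session views and traces.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "MyServer",
                    InitialCatalog = "MyDatabase",
                    IntegratedSecurity = true,
                    ApplicationName = "SSIS - LoadCustomers.dtsx"
                };

                Console.WriteLine(builder.ConnectionString);
                // Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True;
                // Application Name=SSIS - LoadCustomers.dtsx
            }
        }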

    Read the article

  • Why are there so many DB management systems?

    - by mr.bio
    Hi there, I have always asked myself: why are there so many DB management systems? I am not a DB expert, and I have never thought about using another DB than MySQL. Programming languages offer different paradigms, so it makes sense to choose a specific language for your purpose. What are the factors in choosing a specific DB management system?

    Read the article

  • Expression Blend + Sketchflow Preview for Microsoft Visual Studio 2012

    - by T
    The Expression Blend team has released a preview version of Blend that addresses some of the missing features in the version of Expression Blend that ships with VS 2012. Here is a download link for the preview version, which has a lot of the features that were missing from the shipped version. My suggestion is that anyone who works with XAML and VS 2012 download this version of Blend: http://www.microsoft.com/en-us/download/details.aspx?id=30702

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influences you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing the data, and storing and retrieving it with the least overhead. I've summarized a few questions below:
    1. Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them?
    2. Should we use SQL Server Analysis Services?
    3. Is there any other library than Json.NET for parsing the data?
    4. Is it advisable to use a C# library rather than FQL + GET requests?
    I've tried to provide as much info as possible. Please share your views.
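
    On the Json.NET question, a minimal sketch of pulling a feed with a plain GET request and walking it with Json.NET (the Graph API URL, token, and field names here are illustrative placeholders, not from the original post):

        using System;
        using System.Net;
        using Newtonsoft.Json.Linq;

        class FeedFetcher
        {
            static void Main()
            {
                // Placeholder endpoint and token -- substitute your own.
                var url = "https://graph.facebook.com/me/feed?access_token=ACCESS_TOKEN";

                using (var client = new WebClient())
                {
                    var json = client.DownloadString(url);
                    var feed = JObject.Parse(json);

                    // The Graph API wraps the posts in a "data" array.
                    foreach (var post in feed["data"])
                    {
                        Console.WriteLine("{0}: {1}", (string)post["id"], (string)post["message"]);
                        // ...persist each post here for offline analysis
                    }
                }
            }
        }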

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database for their data per customer. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products in a database, and within that database there might be multiple schemas for the customers. Of course, he needs to query across all customers, and that becomes another requirement.
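
    As a toy sketch of that scoring approach (the requirements, choices, and numbers below are made up purely for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DecisionMatrix
        {
            static void Main()
            {
                // Requirements across the top, choices down the side; scores are 1 (poor) to 5 (great).
                // All names and numbers here are hypothetical -- fill in your own requirements.
                var scores = new Dictionary<string, int[]>
                {
                    { "One database, shared tables",       new[] { 2, 5, 5 } },  // isolation, admin, cross-customer queries
                    { "One database, schema per customer", new[] { 4, 2, 4 } },
                    { "Database per customer",             new[] { 5, 1, 2 } }
                };

                // Highest total wins, exactly as described above.
                foreach (var choice in scores.OrderByDescending(c => c.Value.Sum()))
                    Console.WriteLine("{0}: {1}", choice.Key, choice.Value.Sum());
            }
        }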

    Read the article

  • Between-request Garbage Collection using Passenger

    - by raphaelcm
    We're using Rails 3.0.7 and REE 1.8.7. Long-term, we will be upgrading, but at the moment it's not feasible. Following the advice of several blog posts, we've been tuning our GC, and have settings that work pretty well. But we would really like to run GC outside of the request-response cycle. I've tried patching Passenger per this post, and using the code supplied in this SO question. In both cases, GC does indeed happen between requests. However, every time the between-request GC happens, I see a bunch of this:

        MONGODB [INFO] Connecting...
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        Starting the New Relic Agent.
        Installed New Relic Browser Monitoring middleware
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` = 'pages' AND `refinery_settings`.`name` = 'use_marketable_urls' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 1 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \"false\"\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 1
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (4.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` IS NULL AND `refinery_settings`.`name` = 'user_image_sizes' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 17 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \n:small: 120x120>\n:medium: 280x280>\n:large: 580x580>\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 17
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        ******** Engine Extend: app/helpers/blog_posts_helper
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        ******** Engine Extend: app/models/user
        SQL (0.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        (etc, etc, etc)

    This is what happens when Rails "loads the world" when the app starts up. Basically, GC.start is re-loading the app for some reason. Because of this, between-request GC is much slower than inline GC. Is there a way around this? I would love to have snappy between-request GC if possible. Thanks.

    Read the article

  • Does SQL Server have any kind of magic undo feature?

    - by Andrew G. Johnson
    Long story short: I tried to quickly update a single row in SQL Server using Management Studio and just typed UPDATE table SET column='value', forgetting the WHERE other_column='other_value' portion. I went for lunch, came back, and there were 15 unread emails waiting for me. It happened about an hour ago, and I'm waiting for the database guy to come back to see when the last backup was. There's no magic UNDO feature though, is there?

    Read the article

  • How to separate production and test assets during development?

    - by bcsanches
    Hi folks, this is something of a complement to "Assets Management, database or versioning system?". I am wondering how to separate development assets, especially programmers' assets, from production assets. For example, if we keep all the assets in the same repository, how do you handle both programmers' assets and final game assets? Do you keep a separate directory for each of those, allowing duplicates? Or do you use some fancy scheme for stripping the "development" and "test" assets out of the final build?

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city & state and geocodes that address to a lat/long location. Right now I am using the Google Maps API, ColdFusion, and SQL Server. Basically, the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are. This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
          <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
              geocoder.geocode( {<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                  var marker = new google.maps.Marker({
                    map: map,
                    position: results[0].geometry.location,
                    title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                  });
                } else {
                  alert("Geocode was not successful for the following reason: " + status);
                }
              });
            }
          </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
          geocoder = new google.maps.Geocoder();
          var latlng = new google.maps.LatLng(42.4167,-90.4290);
          var myOptions = {
            zoom: 5,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          }
          var marker = new google.maps.Marker({
            position: latlng,
            map: map,
            title: "Test"
          });
          map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses lat/long values hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one. For a concrete example, suppose we have one table, an "orders" table, with the following 6 fields: order_id, customer_id, product_id, quantity, sale_date, price. We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder.
    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, different quantities that are within 20 units of each other, and sale_dates within five days of each other should be considered suspect.
    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
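
    As a rough sketch of the set-based direction for the first criterion only (the is_suspect flag column and the connection string are hypothetical, not from the original question), a self-join-style EXISTS check can mark every order that shares product_id, sale_date, and quantity with an order from a different customer, in one pass instead of a cursor:

        using System.Data.SqlClient;

        class DuplicateFlagger
        {
            static void Main()
            {
                // Criterion 1 only: same product_id, sale_date, quantity; different customer_id.
                // "is_suspect" is a hypothetical review flag column on the orders table.
                const string sql = @"
                    UPDATE o
                    SET    o.is_suspect = 1
                    FROM   orders AS o
                    WHERE  EXISTS (SELECT 1
                                   FROM   orders AS d
                                   WHERE  d.product_id  = o.product_id
                                   AND    d.sale_date   = o.sale_date
                                   AND    d.quantity    = o.quantity
                                   AND    d.customer_id <> o.customer_id);";

                using (var connection = new SqlConnection("Server=.;Database=Imports;Integrated Security=true"))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();   // one set-based statement instead of a row-by-row cursor
                }
            }
        }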

    Read the article

  • Sql Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario: I have two database tables of exactly the same structure. The difference between them is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation.
    Proposed idea: I want to run both applications, each of which will produce roughly 35,000 rows containing 10 columns (70,000 rows of data in all), and then compare each row of data, column by column, to check whether the values are the same or not.
    Current thoughts: Since there is so much data to compare, I feel the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column and write any failed comparisons to a text log file.
    Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem.
    Extra detail: The two applications are both written in C# on .NET 3.5. The database is SQL Server 2005. Help greatly appreciated.
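
    A minimal C# sketch of the column-by-column pass described above (the table names, key column, connection string, and log file are placeholders, and it assumes both tables contain the same number of rows ordered by the same key):

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class TableComparer
        {
            static DataTable Load(SqlConnection connection, string tableName, string keyColumn)
            {
                var table = new DataTable();
                using (var adapter = new SqlDataAdapter(
                    string.Format("SELECT * FROM {0} ORDER BY {1}", tableName, keyColumn), connection))
                {
                    adapter.Fill(table);
                }
                return table;
            }

            static void Main()
            {
                // Placeholder names -- use your real tables, key, and log path.
                using (var connection = new SqlConnection("Server=.;Database=ResultsDb;Integrated Security=true"))
                using (var log = new StreamWriter("comparison_failures.log"))
                {
                    connection.Open();
                    var first = Load(connection, "AppOneResults", "RowId");
                    var second = Load(connection, "AppTwoResults", "RowId");

                    // Assumes both tables hold the same number of rows in the same key order.
                    for (int row = 0; row < first.Rows.Count; row++)
                    {
                        for (int col = 0; col < first.Columns.Count; col++)
                        {
                            object a = first.Rows[row][col];
                            object b = second.Rows[row][col];
                            if (!Equals(a, b))
                                log.WriteLine("Row {0}, column {1}: '{2}' vs '{3}'",
                                              row, first.Columns[col].ColumnName, a, b);
                        }
                    }
                }
            }
        }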

    Read the article

  • Features of Visual Studio 2010

    Visualize your workspace with new multiple monitor support, powerful Web development, new SharePoint support with tons of templates and Web parts, and more accurate targeting of any version of the .NET Framework. Get set to unleash your creativity.

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd-party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200) VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
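
    One direction worth measuring (a sketch, not from the original question; it assumes the temp table is created on the same open connection, since #temp tables are scoped to that session) is SqlBulkCopy, which streams the whole DataTable in one operation instead of issuing one INSERT per row:

        using System.Data;
        using System.Data.SqlClient;

        class TempTableLoader
        {
            static void LoadIntoTemp(SqlConnection connection, DataTable data)
            {
                // The temp table must be created on this same connection/session,
                // otherwise #table won't be visible to the bulk copy.
                using (var create = new SqlCommand(
                    "CREATE TABLE #table (col1 INT, col2 NVARCHAR(50) /* ... col200 ... */)", connection))
                {
                    create.ExecuteNonQuery();
                }

                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "#table";
                    bulk.BatchSize = 5000;           // tune as needed
                    bulk.WriteToServer(data);        // streams all rows in one shot
                }
            }
        }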

    Read the article

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ... [Read Full Article]

    Read the article
