Search Results

Search found 62606 results on 2505 pages for 'sql files'.


  • Paging using Linq-To-Sql based on two parameters in asp.net mvc...

    - by Pandiya Chendur
    The two parameters are currentPage and pageSize. So far I have used SQL Server stored procedures and implemented paging like this:

        GO
        ALTER PROCEDURE [dbo].[GetMaterialsInView]
            -- Add the parameters for the stored procedure here
            @CurrentPage INT,
            @PageSize INT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            SELECT * FROM
            (
                SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS Row
                FROM InTimePagingView
            ) AS InTimePages
            WHERE Row >= (@CurrentPage - 1) * @PageSize + 1
              AND Row <= @CurrentPage * @PageSize

            SELECT COUNT(*) AS TotalCount FROM InTimePagingView

            SELECT CEILING(COUNT(*) / CAST(@PageSize AS FLOAT)) AS NumberOfPages
            FROM InTimePagingView
        END

    Now I am using LINQ to SQL, and I have this:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            where m.Is_Deleted == 0
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }

    Now I want to return the records along with a TotalCount, which I use to generate the page numbers. Is this possible? Any suggestions?

    EDIT: Just found this:

        NorthWindDataContext db = new NorthWindDataContext();
        var query = from c in db.Customers
                    select c.CompanyName;

        // Assuming Page Number = 2, Page Size = 10
        int iPageNum = 2;
        int iPageSize = 10;
        var PagedData = query.Skip((iPageNum - 1) * iPageSize).Take(iPageSize);
        ObjectDumper.Write(PagedData);
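
    A sketch of one way to get both pieces with LINQ to SQL, building on FindAllMaterials() above (currentPage, pageSize, and ordering by Id are assumptions carried over from the stored procedure; Skip/Take require an ordered query):

        var query = FindAllMaterials();
        int totalCount = query.Count();              // runs as SELECT COUNT(*)
        int numberOfPages = (int)Math.Ceiling(totalCount / (double)pageSize);

        var page = query.OrderBy(m => m.Id)          // Skip/Take need a defined order
                        .Skip((currentPage - 1) * pageSize)
                        .Take(pageSize)
                        .ToList();                   // one paged SELECT (ROW_NUMBER under the hood)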

    Read the article

  • New Certification Exam: "Oracle Database 12c: SQL Fundamentals" Released (1Z0-061)

    - by Brandye Barrington
    Oracle Certification begins testing this week for the new Oracle Database 12c Administrator Certified Associate (OCA) certification. Testing for the Oracle Database 12c: SQL Fundamentals (1Z0-061) exam is now underway. Visit pearsonvue.com/oracle and register for exam 1Z0-061. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification Website. Earning the Oracle Database 12c Administrator Certified Associate (OCA) credential demonstrates that you have the foundational knowledge and skills needed to administer the Oracle Database, and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional (OCP). With Oracle Database 12c, you will experience the benefits of an Oracle Database that is re-engineered for Cloud computing. Multitenant architecture brings enterprises unprecedented hardware and software efficiencies, performance and manageability benefits, and fast and efficient Cloud provisioning. Oracle Database 12c certifications emphasize the full set of skills that DBAs need in today's competitive marketplace. Be among the first to obtain this groundbreaking new Oracle Certified Associate (OCA) certification by registering for this exam today.

    QUICK LINKS
      • Certification Path: Oracle Database 12c Administrator Certified Associate (OCA)
      • Certification Exam: Oracle Database 12c: SQL Fundamentals (1Z0-061)
      • Registration: pearsonvue.com/oracle

    Read the article

  • A better way to search Connect

    - by AaronBertrand
    I recently spotted a comment from Microsoft on a Connect item with 13 total up-votes . The comment went something like, "wow, due to the explosive response to this issue, we're going to deal with it right away." Okay, it wasn't that emphatic, it was actually: "I've brought the MVP customer vote count to the attention of dev, and a new owner of this DMV says he will dig up some info for us." Still, knowing that I had seen other items with a much stronger response and barely a note of acknowledgment...(read more)

    Read the article

  • Solving the SQL Server Multiple Cascade Path Issue with a Trigger

    This tip will look at how you can use triggers to replace the functionality you get from the ON DELETE CASCADE option of a foreign key constraint.
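
    Since the tip itself sits behind the link, here is a minimal T-SQL sketch of the general idea (hypothetical Parent/Child tables, not the article's actual code): an INSTEAD OF DELETE trigger removes the child rows first, then the targeted parent rows, emulating ON DELETE CASCADE where SQL Server's multiple-cascade-path restriction forbids the declarative option.

        CREATE TRIGGER trg_Parent_CascadeDelete ON dbo.Parent
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Remove dependents first, then the rows originally targeted
            DELETE c FROM dbo.Child AS c
                JOIN deleted AS d ON c.ParentId = d.Id;
            DELETE p FROM dbo.Parent AS p
                JOIN deleted AS d ON p.Id = d.Id;
        END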

    Read the article

  • Ubuntu One pretends to synchronize files, but it doesn't

    - by Tom Brito
    I have my Ubuntu One account configured on both Ubuntu 11.10 and iOS (iPod touch). The photos from iOS were successfully uploaded, but in Ubuntu One, although the icons show the "syncing" and "synchronized" marks, the files are not showing on the website (one.ubuntu.com). In short: my files are not showing on the Ubuntu One website, although the icons have the "uploaded" mark. Any idea what could be wrong here? Note 1: Also, not sure if it's related, but the icon marks only show when I open the Ubuntu One Control Panel. It shows the message "file was uploaded", but there's nothing online. Note 2: The folder I'm trying to synchronize is 30 MB in size, and my connection is 8 Mbps.

    Read the article

  • How to add a reflection definition to read JSON files in a web game

    - by user3728735
    I have a game which I deployed for desktop and Android. I can read JSON data and create my levels, but when it comes to reading JSON files from the web app, I get an error logged saying it cannot read the JSON file. I researched a lot and found out that I should add my JSON config class to the reflection configuration, so I added this line to gameName.gwt.xml, which is in the core folder:

        <extend-configuration-property name="gdx.reflect.include" value="com.las.get.level.LevelConfig"/>

    But it did not work either. I have no idea where I should place this line, or what I should change, to make my web app read JSON files.
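
    For reference, a sketch of where this line normally lives in a stock libGDX/GWT layout (module names here are hypothetical; per the libGDX reflection docs the property belongs in a GWT module the html project actually compiles, typically GdxDefinition.gwt.xml, not only in a core-folder file):

        <!-- html/src/.../GdxDefinition.gwt.xml -->
        <module>
            <inherits name='com.badlogic.gdx.backends.gdx_backends_gwt' />
            <inherits name='GameName' />  <!-- hypothetical core module name -->
            <extend-configuration-property name="gdx.reflect.include"
                                           value="com.las.get.level.LevelConfig" />
        </module>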

    Read the article

  • SSIS Virtual Class

    - by ejohnson2010
    I recorded a virtual SSIS class with the good folks over at SSWUG, and the first airing of the class will be May 15th. This is 100% online, so you can do it on your own time and from anywhere. The class will run monthly, and I will be available for questions throughout. You get the following 12 sessions on SSIS, each about an hour:
    Session 1: The SSIS Basics
    Session 2: Control Flow Basics
    Session 3: Data Flow - Sources and Destinations
    Session 4: Data Flow - Transformations
    Session 5: Advanced Transformations...(read more)

    Read the article

  • Gracefully Handling Deadlocks

    - by Derek Dieter
    In some situations, deadlocks may need to be dealt with not by changing the source of the deadlock, but by handling the deadlock gracefully. An example of this may be an external subscription that runs on a schedule deadlocking with another process. If the subscription deadlocks, then it would be OK to [...]
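
    The excerpt is cut off, but a common shape for "handling it gracefully" in T-SQL (a sketch, not necessarily the article's code) is to catch deadlock error 1205 and retry:

        DECLARE @retries INT = 3;
        WHILE @retries > 0
        BEGIN
            BEGIN TRY
                BEGIN TRANSACTION;
                -- ... the work that occasionally deadlocks ...
                COMMIT TRANSACTION;
                BREAK;  -- success: stop retrying
            END TRY
            BEGIN CATCH
                IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
                IF ERROR_NUMBER() = 1205
                    SET @retries -= 1;  -- chosen as deadlock victim: try again
                ELSE
                    THROW;              -- anything else: re-raise (SQL Server 2012+)
            END CATCH
        END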

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50 MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5 MB in size but no more than 6 MB. The * part is all numbers (in programming regex, that should be core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says: "This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values." Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much---only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150 MB, so you see why I'm fretting over a 50 MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed the ones responsible for the ~50 MB spike, as my disk space usage is back to expected.
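
    Since the question asks "am I under attack?", a couple of quick checks worth noting (a sketch; core.12345 is a hypothetical filename): core.* files are crash dumps, and file(1) usually reports which program produced one. Given the mailbox text inside, the likely culprit is the mail/IMAP daemon crashing, not an intruder, and deleting the dumps themselves is safe.

        # What produced this dump? (hypothetical filename)
        file core.12345
        # e.g.: core.12345: ELF 64-bit LSB core file x86-64 ... from 'imapd'

        # Find and delete all of them under public_html
        find ~/public_html -type f -name 'core.[0-9]*' -delete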

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change: sp_cdc_change_job @job_type='cleanup', @retention=7200 The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
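
    The post is truncated before the verification step; one way to check it yourself (a sketch based on where CDC stores its job settings) is to query msdb.dbo.cdc_jobs:

        -- retention is stored in minutes: 7200 = 5 days
        SELECT job_type, retention
        FROM msdb.dbo.cdc_jobs
        WHERE job_type = 'cleanup';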

    Read the article

  • Are ternary operators not valid for linq-to-sql queries?

    - by KallDrexx
    I am trying to display a nullable DateTime in my JSON response. In my MVC controller I am running the following query:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            id = r.id,
                            name = r.name,
                            start = DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                            end = r.TestRequestRuns.First().end_dt.HasValue
                                      ? DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value)
                                      : string.Empty
                        });

    When I run requests.ToArray() I get the following exception:

        Could not translate expression 'Table(TestRequest)
            .Where(r => ((r.scheduled_time == null) AndAlso (r.TestRequestRuns.Count > 0)))
            .Select(r => new <>f__AnonymousType18`4(
                id = r.id,
                name = r.name,
                start = value(QAWebTools.Controllers.TestRequestsController)
                            .DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                end = IIF(r.TestRequestRuns.First().end_dt.HasValue,
                          value(QAWebTools.Controllers.TestRequestsController)
                              .DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value),
                          Invoke(value(System.Func`1[System.String])))))'
        into SQL and could not treat it as a local expression.

    If I comment out the end = line, everything seems to run correctly, so it doesn't seem to be the use of my local DateAndTimeDisplayString method. The only thing I can think of is that LINQ to SQL doesn't like ternary operators. I think I've used ternary operators before, but I can't remember if it was in this code base or another one (that uses EF4 instead of L2S). Is this true, or am I missing some other issue?
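
    Whatever the precise trigger (the IIF(...) wrapping the captured string.Empty delegate is the part the translator choked on), the usual workaround is the same: select only raw, translatable columns in the SQL part, then switch to LINQ to Objects with AsEnumerable() and format there. A sketch, reusing the names from the question:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            r.id,
                            r.name,
                            start_dt = r.TestRequestRuns.First().start_dt,
                            end_dt = r.TestRequestRuns.First().end_dt  // nullable, still SQL-translatable
                        })
                       .AsEnumerable()  // everything below runs as LINQ to Objects
                       .Select(x => new
                       {
                           id = x.id,
                           name = x.name,
                           start = DateAndTimeDisplayString(x.start_dt),
                           end = x.end_dt.HasValue ? DateAndTimeDisplayString(x.end_dt.Value)
                                                   : string.Empty
                       })
                       .ToArray();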

    Read the article

  • How to copy files via terminal?

    - by Levan
    This might sound silly to some people, but I'm new to Linux and don't know how to use it as well as other people. Yes, I read about copying files with the terminal, but concrete examples would help me a lot. Here is what I want to do:
    1. I have a file /home/levan/kdenlive/untitelds.mpg and I want to copy it to /media/sda3/SkyDrive without deleting anything in the SkyDrive directory.
    2. I have a file /media/sda3/SkyDrive/untitelds.mpg and I want to copy it to /home/levan/kdenlive without deleting anything in the kdenlive directory.
    3. I want to copy a folder from my home directory to sda3 (and the opposite) without deleting anything in the destination directory.
    4. I want to cut (move) a folder/file to another place without deleting the files already in the destination directory.
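
    A sketch of the matching commands (numbers match the examples above; paths are from the question, "myfolder" is a placeholder). cp and mv leave other files in the destination untouched; a same-named file would be overwritten, so add -i to be asked first:

        # 1. Copy a file into SkyDrive
        cp /home/levan/kdenlive/untitelds.mpg /media/sda3/SkyDrive/
        # 2. And back the other way
        cp /media/sda3/SkyDrive/untitelds.mpg /home/levan/kdenlive/
        # 3. Copy a whole folder (recursive)
        cp -r /home/levan/myfolder /media/sda3/
        # 4. Move (cut) instead of copy
        mv /home/levan/myfolder /media/sda3/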

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine
    I am planning a computer game and its engine. There will be a three-dimensional world with a first-person view, and it will be single player for now. The programming language is C++ and it uses OpenGL.

    Data Centered Design Decision
    My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, and so on. Each component can trigger and listen to events. Moreover, each component can read, edit, create, and remove data. The question is about the data manager.

    Whether to Use a Relational Database
    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This contains virtually all game content like items, characters, and inventories, except meshes and textures, which are even more performance-related, so I will keep them in memory. Is a SQL database fast enough to use for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages Would Be
    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects to represent entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to take care of data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there already is a place where the whole game state is stored.
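
    For a concrete feel of the option being weighed, a minimal sketch using SQLite's C API (an illustration under the assumptions above, not a recommendation; the entity table is hypothetical): an in-memory database holding positions that can be queried by region, as described.

        #include <sqlite3.h>
        #include <cstdio>

        int main() {
            sqlite3* db = nullptr;
            sqlite3_open(":memory:", &db);  // in-memory: no disk I/O on the hot path

            // A hypothetical entity table with a position
            sqlite3_exec(db,
                "CREATE TABLE entity (id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL);"
                "INSERT INTO entity (x, y, z) VALUES (1.0, 2.0, 3.0), (50.0, 0.0, 9.0);",
                nullptr, nullptr, nullptr);

            // "Objects near the player": a range query on x as a stand-in
            sqlite3_stmt* stmt = nullptr;
            sqlite3_prepare_v2(db,
                "SELECT id, x, y, z FROM entity WHERE x BETWEEN ?1 AND ?2;",
                -1, &stmt, nullptr);
            sqlite3_bind_double(stmt, 1, 0.0);
            sqlite3_bind_double(stmt, 2, 10.0);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                std::printf("entity %d at (%.1f, %.1f, %.1f)\n",
                            sqlite3_column_int(stmt, 0),
                            sqlite3_column_double(stmt, 1),
                            sqlite3_column_double(stmt, 2),
                            sqlite3_column_double(stmt, 3));

            sqlite3_finalize(stmt);
            sqlite3_close(db);
        }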

    Read the article

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering from the original stated outline quite a bit, because as I teach about this more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) I start to find that a given order works better. Originally I had slated myself to talk more about modeling here for three chapters, then get back to the more implementation topics to finish out the book, but now I am going to keep plugging through...(read more)

    Read the article

  • Using Coalesce

    - by Derek Dieter
    The COALESCE function is used to find the first non-null value. The function takes a virtually limitless number of parameters and evaluates them in order, returning the first non-null one. If all the parameters are NULL, then COALESCE will also return a NULL value.

        -- hard-coded example (all character arguments, so SQL Server's data type
        -- precedence doesn't try to convert 'abc' to int)
        SELECT MyValue = COALESCE(NULL, NULL, 'abc', '123')

    The example above returns [...]
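
    A sketch of a more typical use against columns (table and column names are hypothetical): fall back from one phone number to another, then to a literal default.

        SELECT ContactPhone = COALESCE(MobilePhone, HomePhone, 'no phone on file')
        FROM dbo.Customers;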

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application in VB.NET that will be a combination of document and inventory management. It will store document images in TIFF, PDF, XPS, TXT, DOC, PPT, and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to make it searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite, and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database for their data per customer. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) that will be involved, if there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answer to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, he needs to query across all customers, that becomes another requirement.

    Read the article

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within the same table?

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is with a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one. For a concrete example, suppose we have one "orders" table with the following six fields: order_id, customer_id, product_id, quantity, sale_date, price. We want to look through the records to find suspect duplicates on the following example criteria, which get increasingly harder:
    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but different quantities within 20 units, and sale_dates within five days of each other, should be considered suspect.
    Is it possible to satisfy each of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
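
    A sketch of how the first criterion can be expressed as a set-based self-join instead of a cursor; the other two follow the same shape, swapping an equality for a range test:

        -- Criterion 1: same product_id, sale_date, quantity; different customer_id
        SELECT o1.order_id, o2.order_id AS duplicate_of
        FROM orders AS o1
        JOIN orders AS o2
          ON  o2.product_id  = o1.product_id
          AND o2.sale_date   = o1.sale_date
          AND o2.quantity    = o1.quantity
          AND o2.customer_id <> o1.customer_id
          AND o2.order_id    > o1.order_id;  -- report each pair once

        -- Criterion 2 replaces the sale_date equality with a window:
        --   AND o2.sale_date BETWEEN DATEADD(DAY, -5, o1.sale_date)
        --                        AND DATEADD(DAY,  5, o1.sale_date)
        -- Criterion 3 additionally turns the quantity equality into:
        --   AND ABS(o2.quantity - o1.quantity) <= 20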

    Read the article

  • Renaming hundreds of files at once for proper sorting

    - by Mew
    I have a ton of files, all named stuff like 1.jpg, 2.jpg, 3.jpg, and so on up to 1439.jpg. However, I have a problem with one of my projects and alphabetizing. It will usually go in the order 1.jpg, 10.jpg, 11.jpg and so on. What I need is some way (or a script) to name the files so they are in the format such as 00001.jpg all the way up to 01439.jpg. How would I be able to do this quickly and efficiently?
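
    A sketch of one way to do it in bash, run from inside the folder with the images (the 10# forces base-10 arithmetic so a re-run on already-padded names doesn't trip over leading zeros):

        # Rename every NUMBER.jpg to a five-digit, zero-padded name: 1.jpg -> 00001.jpg
        for f in [0-9]*.jpg; do
            new=$(printf '%05d.jpg' "$((10#${f%.jpg}))")
            [ "$f" = "$new" ] || mv -n -- "$f" "$new"
        done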

    Read the article

  • Setting up SVN (Subversion) to manage our company's files; how to exclude large files from being versioned

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc.: anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc.)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contracts, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go. I like this system since it keeps versions of all my documents, and at the same time I will essentially have three backups in three different locations (three local copies), so there's no need to back it up separately. I am unsure of how SVN would perform as purely a huge file repository.
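
    On the mechanics: Subversion has no per-file "store but don't version" mode, since everything committed is versioned. The usual approach (a sketch; the property syntax is standard svn, the path is from the layout above) is to keep the large working files out of the repository with svn:ignore and back them up by other means:

        # Ignore Photoshop sources in a design folder (the property is set on the directory)
        svn propset svn:ignore "*.psd" "Clients/Company A/Design"
        svn commit -m "Ignore large Photoshop files in Company A design folder"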

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city and state and geocodes that address to a lat/long location. Right now I am using the Google Maps API, ColdFusion, and SQL Server. Basically the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are. This is my code to do the geocoding; viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
            <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
                geocoder.geocode({ <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address }, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        var marker = new google.maps.Marker({
                            map: map,
                            position: results[0].geometry.location,
                            title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                        });
                    } else {
                        alert("Geocode was not successful for the following reason: " + status);
                    }
                });
            }
            </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
            geocoder = new google.maps.Geocoder();
            var latlng = new google.maps.LatLng(42.4167, -90.4290);
            var myOptions = {
                zoom: 5,
                center: latlng,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            }
            var marker = new google.maps.Marker({
                position: latlng,
                map: map,
                title: "Test"
            });
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses a lat/long that was hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.
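
    Two likely culprits stand out (an observation on the posted code, not from an answer): the geocode request object must use the literal key address (the ColdFusion #...# output should only supply the string value), and in initialize() the "Test" marker is created before map is assigned, so it binds to undefined. A minimal sketch of a corrected geocoding helper:

        // Geocode one "City, ST" string and drop a marker on the (already created) map
        function codeCityState(cityState) {
            geocoder.geocode({ address: cityState }, function (results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                    new google.maps.Marker({
                        map: map,
                        position: results[0].geometry.location,
                        title: cityState
                    });
                } else {
                    alert("Geocode failed for " + cityState + ": " + status);
                }
            });
        }
        // e.g. inside the cfloop:
        //   codeCityState("<cfoutput>#Trim(hometown)#, #Trim(state)#</cfoutput>");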

    Read the article
