Search Results

Search found 42327 results on 1694 pages for 'microsoft sql t sql'.

Page 499/1694 | < Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >

  • Multi-user web game with scheduled processing?

    - by Rooq
    I have an idea for a game which I am in the process of designing, but I am struggling to establish whether the way I plan to implement it is possible. The game is a text-based sports management simulation. It will require players to take certain actions through a web browser which will interact with a database - adding, updating and selecting. Most of the code to be executed at this point will be fairly straightforward. The main processing will take place in applications which are scheduled to run on the server at certain times. These apps will process transactions added by the players and also perform some automatic processing based on the game date. My plan was to use a SQL Server database (at last count I require about 20 tables) and VB.net for all the coding (coming from a mainframe programming background this language is the simplest for me to get to grips with). I will also need a scheduling tool on the server. Can anyone tell me if what I am planning is feasible before I dive into the actual coding stage of my project?
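
    To make the scheduled-processing part concrete, the pattern I have in mind is a pending-actions table that the web pages insert into and the scheduled application works through. A rough sketch (all names here are placeholders, not a finished design):

        -- Actions submitted by players through the browser
        CREATE TABLE dbo.PendingAction
        (
            ActionId    INT IDENTITY(1,1) PRIMARY KEY,
            PlayerId    INT          NOT NULL,
            ActionType  VARCHAR(20)  NOT NULL,  -- e.g. 'TRANSFER_BID', 'SET_LINEUP'
            Payload     VARCHAR(500) NULL,
            SubmittedAt DATETIME     NOT NULL DEFAULT GETDATE(),
            ProcessedAt DATETIME     NULL
        );

        -- The scheduled job (e.g. a SQL Server Agent job) picks up unprocessed actions
        SELECT ActionId, PlayerId, ActionType, Payload
        FROM dbo.PendingAction
        WHERE ProcessedAt IS NULL
        ORDER BY SubmittedAt;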

    Read the article

  • How do I install MS Office 2010 via WINE?

    - by Emeris
    I am trying to install MS Office 2010 on Ubuntu 12.04 on my new MacBook Pro (15"). I have already read every existing thread on the forums and followed every existing tutorial, but my problem seems unique so far, since whichever solution I try, the problem remains. When I launch PlayOnLinux, two boxes appear one after the other (before last week's Ubuntu upgrade, only the first box appeared). The first one tells me: "Error: PlayOnLinux is unable to find 32-bits OpenGL libraries. You might encounter problem with your games." When I close this window, a second one pops up, stating: "Error: PlayOnLinux cannot find 7z. You should install it to use PlayOnLinux." Of course, I tried purging PlayOnLinux (uninstalling it and re-installing it). I also tried other versions of PlayOnLinux. Nothing helps: the problem remains. So far I have not managed to install the 32-bit OpenGL libraries, since I have a Radeon graphics card (which seems to be unusual) and I just cannot find these libraries. Once the two "error" boxes are closed, PlayOnLinux is open but does not seem to work properly; when I try to install Microsoft Office 2010, nothing happens. When I try to close PlayOnLinux, it is even worse: Unity seems unable to close it (I even got a frozen screen when trying to xkill it from the terminal). I am looking forward to any tips that could help. P.S.: 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler [AMD Radeon HD 6600M Series]

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as tags/subjects about the articles, but as strings like "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside of databases, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there a SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
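
    To give a concrete example of what I mean, for the date strings above I assume the conversion during migration would be something along these lines (T-SQL on the source side; the table and column names are invented):

        -- Convert 'YYYYMMDDd' strings (e.g. '20040117d') to a proper DATE;
        -- style 112 is the ISO yyyymmdd format.
        SELECT CONVERT(date, LEFT(SubjectValue, 8), 112) AS ArticleDate
        FROM dbo.ArticleSubject
        WHERE SubjectValue LIKE '[12][0-9][0-9][0-9][01][0-9][0-3][0-9]d';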

    Read the article

  • I have 5 days of vouchers for MS training… help me choose? [closed]

    - by Shyatic
    I'm a Microsoft-centric guy (systems engineering side) and I already know the syntax of VB, have done VBScript pretty extensively and Excel VBA stuff as well. I want to make the leap into proper programming, probably with C# because it teaches me syntax I can use for Java if I want to go that route at some point. Since I have vouchers for 5 days of programming courses, and I can understand logic and how the .NET framework works, I would love to hear ideas on which MS courses I should take. My primary focus is to work on web applications with web services that interact and do neat stuff... like, for example, a 'chat' room or something interactive on the web. Or should I do something with HTML5/JS? I am really not sure... like I said, I want to work towards making web services/sites. Not the next Facebook, mind you, but something in that spectrum on a much smaller scale. Please give me any advice; I'd like to book these classes ASAP. Obviously getting involved with SQL and the other things I will require would be important here... you guys know better than me! Thanks!

    Read the article

  • Fixing SharePoint 2010 Permission Problems on Windows 7

    - by Ricardo Peres
    I had a tough time trying to get SharePoint working perfectly on a Windows 7 development machine that was occasionally disconnected from the Active Directory (when I am home I must connect through a VPN). I mostly had problems with service applications such as User Profile, Managed Metadata, Business Connectivity Services and the like, and all I got were cryptic messages such as “access denied” or “the service or application pool is not started”. I was sure that both the services and application pools were running under a domain account that had proper permissions on the SQL Server instance, and basically it was a fresh installation. Lots of people are having the same problem, apparently. After banging my head against the wall for several days, I remembered about farm (what I had) versus stand-alone (which I had never tried) installations. Bingo! Here’s what I did: I dropped all SharePoint databases and logins and reinstalled SP from scratch, only this time not in farm mode, but as stand-alone. After the SharePoint Configuration Wizard started, I cancelled it and started the Management Shell. I created the configuration database manually by using the New-SPConfigurationDatabase cmdlet, where I specified a local account – something that the Configuration Wizard wouldn’t allow me to do. Then I restarted the Configuration Wizard and everything began working perfectly! Yes, I got some pre-configured service applications and also some content which I didn’t need, but I realized it was possible to drop and recreate everything the way I wanted to. All services and application pools are now running under local accounts, which is fine for my development needs. Really, Microsoft… I hope this will bring light to someone facing the same problems!

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs work on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far further behind in mature database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
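
    As an aside, if you’re at that “change scripts” stage, even a simple numbered, guarded script per change is a solid first step. A minimal sketch – the version table and naming convention here are purely illustrative, not a prescribed format:

        -- 0042_add_customer_email.sql: one numbered change script, kept in source control
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 42)
        BEGIN
            ALTER TABLE dbo.Customer ADD Email VARCHAR(255) NULL;

            INSERT INTO dbo.SchemaVersion (VersionNumber, AppliedAt)
            VALUES (42, GETDATE());
        END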
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in place

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild an old, huge website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses:

    - Ajax (client side, with prototype.js)
    - ASP (VBScript server side)
    - SQL Server 2005
    - IIS 7 as web server

    This website uses hundreds of stored procedures, and the requests are made by an ajax call to a single ASP page that contains a huge select case. A short example:

    JavaScript + Prototype:

        var data = {
            action: 'NEWS',
            callback: 'doNews',
            param1: $('text_example').value,
            ......: ..........
        };
        AjaxGet(data); // perform a call using another function + prototype

    Server-side ASP:

        <%
        ......
        select case request("Action")
            case "NEWS"
                With cmmDB
                    .ActiveConnection = Conn
                    .CommandText = "sp_NEWS_TO_CALL_for_example"
                    .CommandType = adCmdStoredProc
                    Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
                    Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
                    ' ........ can be more parameters
                    .Parameters.Append par0DB
                    .Parameters.Append par1DB
                    par0DB.Value = request("Param1")
                    par1DB.Value = request(".....")
                    set rs = cmmDB.execute
                    RecodsetToJSON rs, jsa ' create JSON response using a sub
                End With
        ....
        %>

    So as you can see, I have an ASP page with a lot of CASEs, and this page answers all the ajax requests in the site. My questions are: Instead of having many CASEs, is it possible to create dynamic VB code that parses the ajax request and dynamically builds the call to the desired SP (including the parameters passed from JS)? What is the best approach to handle situations like this, using the advantages of .NET plus Prototype or jQuery? How do the big sites handle situations like this? Do they create one page per request? Thanks in advance for suggestions, directions and tips.
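
    To show what I mean by "dynamic", one idea I am toying with is a lookup table that maps each action name to its stored procedure, so a single generic handler can validate the action and refuse unknown ones instead of growing the select case. This is just a sketch; all names are invented:

        CREATE TABLE dbo.AjaxAction
        (
            ActionName    VARCHAR(50) NOT NULL PRIMARY KEY,  -- e.g. 'NEWS'
            ProcedureName SYSNAME     NOT NULL,              -- e.g. 'sp_NEWS_TO_CALL_for_example'
            IsEnabled     BIT         NOT NULL DEFAULT 1
        );

        -- The generic handler looks up the action and only executes whitelisted procedures
        SELECT ProcedureName
        FROM dbo.AjaxAction
        WHERE ActionName = @ActionName
          AND IsEnabled = 1;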

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, whereas I'm more interested in what the databases are. I think this label really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):

    - Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
    - Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
    - Assume that each database has the best support possible for free.
    - Assume you have 100% buy-in from management.
    - Assume you have an infinite amount of money to throw at the problem.

    Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For the above, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01... (over 500 such columns). The number of rows is around 5,000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like to get help regarding:

    1) What is the best database design approach? Is it OK to have this many columns?
    2) How can it be normalized? (The user can use any of these fields in search criteria.)
    3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for it?

    P.S. (Update): I have been using MySQL till now and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to create your query on; however, I seek to increase that number much more in future.
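
    To make question 2) concrete, the normalized shape I am considering is one row per company per field per year, instead of one column per field (table and column names below are just placeholders):

        CREATE TABLE company (
            company_id   INT PRIMARY KEY,
            company_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE financial_fact (
            company_id  INT NOT NULL,
            field_name  VARCHAR(50) NOT NULL,    -- e.g. 'sales_annual'
            fiscal_year SMALLINT NOT NULL,       -- e.g. 2000 instead of an _00 suffix
            amount      DECIMAL(18,2) NOT NULL,
            PRIMARY KEY (company_id, field_name, fiscal_year),
            FOREIGN KEY (company_id) REFERENCES company (company_id)
        );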

    Read the article

  • Saving dragged-and-dropped item positions on postback in ASP.NET [closed]

    - by Deeptechtons
    OK, I have seen many posts on how to serialize the order of dragged items to get a hash, and they explain how to save it. Now the question is: how do I restore the dragged items to their saved positions the next time the user logs in, using the hash value that I got? E.g.:

        <ul class="list">
            <li id="id_1">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_2">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_3">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_4">
                <div class="item ui-corner-all ui-widget"></div>
            </li>
        </ul>

    which on serialize will give "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into that order again? The code that makes these items sortable is below, so you can see which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. It seems to me this project will primarily deal with reading data from a database and making graphs out of the data. This will require me to make custom queries based on whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website, so I can utilize the jQuery dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that because of its functional programming paradigm, it is a good language for asynchronous functions. I read about how you can use it with LINQ to SQL and how easy it is to make queries without actually writing the query language yourself. I understand the concept of the MVC design pattern, but I don't understand what they mean about C# being the front-end and F# being the back-end. Can someone clarify this to me? Also, what are your thoughts about doing this project in this way? Any comments and thoughts are greatly appreciated. I feel as if learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database, so F# is possibly the model part of the design pattern and C# is the controller, while HTML, JavaScript and jQuery will be my view. Is that right?

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

    User table:
    - UserID (primary key, indexed, no dupes)
    - FirstName
    - LastName
    - Department

    Request table:
    - RequestID (primary key, indexed, no dupes)
    - <...> various data fields containing request details
    - UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

        DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

        UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

        RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small-to-mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This returns the table's row count along with its reserved, data, index and unused space. Of course, this only shows one table. If you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Starks' CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists space used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;

            -- sp_spaceused returns one row per table; capture it in the temp table
            INSERT #TempTable
            EXEC sp_spaceused @TableName
        END

        CLOSE tableCursor
        DEALLOCATE tableCursor

        -- Strip the ' KB' suffix so the reserved size sorts numerically
        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • T-SQL select where and group by date

    - by bconlon
    T-SQL has never been my favorite language, but I need to use it on a fairly regular basis and every time I seem to Google the same things. So if I add them here, it might help others with the same issues, but it will also save me time later as I will know where to look for the answers!!

    1. How do I SELECT FROM WHERE to filter on a DateTime column? As it happens this is easy, but I always forget. You just put the DATE value in single quotes and in standard format:

        SELECT StartDate
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        ORDER BY StartDate

    2. How do I then GROUP BY and get a count by StartDate? A bit trickier, but you can use the built-in DATEADD and DATEDIFF to set the TIME part to midnight, allowing the GROUP BY to have a consistent value to work on:

        SELECT DATEADD(d, DATEDIFF(d, 0, StartDate), 0) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]

    Note: the [Customer Creation Date] and [Number Of New Customers] column aliases just provide more readable column headers.

    3. Finally, how can you format the DATETIME to only show the DATE part (after all, the TIME part is now always midnight)? The built-in CONVERT function allows you to convert the DATETIME to a CHAR array using a specific format. The format number is a bit arbitrary and needs looking up, but 101 is the U.S. standard mm/dd/yyyy, and 103 is the U.K. standard dd/mm/yyyy.

        SELECT CONVERT(CHAR(10), DATEADD(d, DATEDIFF(d, 0, StartDate), 0), 103) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]

    Read the article

  • T-SQL Tuesday #34: Help me, Obi-Wan Kenobi, You're My Only Hope!

    - by AllenMWhite
    This T-SQL Tuesday is about a person that helped you understand SQL Server. It's not a stretch to say that it's people that help you get to where you are in life, and Rob Volk (@sql_r) is sponsoring this month's T-SQL Tuesday asking who that person is that helped you get there. Over the years, there've been a number of people who've helped me, but one person stands out above the rest, who was patient, kind and always explained the details in a way that just made sense. I first met Don Vilen at...(read more)

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement ajax validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the blur event. My question is: which of these is really the best way to go?

    Keyup: From the user's POV, it would be good if the validation is done as and when they are typing (on the keyup event) - of course, I am waiting for half a second to see if the user stops typing before firing off the request, and the user can make any adjustments immediately. But this means I am sending way more requests than if I validated the username on the blur event.

    Blur: The number of requests will be much lower when the validation is done on the blur event. But this means the user has to actually move away from the textbox, look at the validation result, and if necessary go back to it to make any changes, and repeat the whole process until he gets it right.

    I had a quick look at google, tumblr and twitter, and no one actually does username validation on keyup events (heck, tumblr waits for the form to be posted), but I can swear I have seen keyup validations in a lot of places too. So, coming back to the question: will keyup validations be too many for the server - is it an unnecessary overhead? Or is it worth taking these hits to give the user a better experience?

    ps: all my regex validations etc. are already done in javascript, and only when it passes all these other criteria does it send a request to the server to check if a username already exists. (And the server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resource.)

    pps: I'm on the asp.net, MS SQL stack, if that matters.
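
    pps2: In case it helps frame the server-cost question, the check itself can be made very cheap; a sketch of what I assume is the standard approach (the index name is made up):

        -- A unique index both enforces uniqueness and makes the lookup a single index seek
        CREATE UNIQUE INDEX IX_User_Username ON dbo.[User] (Username);

        -- EXISTS can stop at the first match, unlike COUNT over a scan
        SELECT CASE WHEN EXISTS (SELECT 1 FROM dbo.[User] WHERE Username = @Username)
                    THEN 1 ELSE 0 END AS IsTaken;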

    Read the article

  • Should SQL Server tools target wide screen formats instead of portrait formats?

    - by Greg Low
    There was a short discussion on the SQL Down Under mailing list this morning about screen resolutions for working with the SQL Server tools. In particular, the issue was about how unusable the tools are on the 1366x768 resolution notebooks that now seem to be the most common. While finding a notebook with an appropriate resolution is obviously the answer at this time, I started thinking that the product itself needs to address this. SQL Server tools currently target a portrait 4:3 shape for minimum...(read more)

    Read the article

  • Database Mail and SMO are indeed supported on 64-bit, Standard Edition instances of SQL Server 2012

    - by Argenis
    This is something that comes up rather regularly in forums, so I decided to create a quick post to make sure that folks out there can feel better about SQL Server 2012. If you read the Web article “Features Supported By Editions of SQL Server 2012” as of the time of writing this post, you will see that the article points out that these two features are not supported on x64 Standard Edition. This is NOT correct. It is most definitely a documentation bug – one that unfortunately has caused some customers to sit in a holding pattern before upgrading to SQL Server 2012. Database Mail and SMO do indeed work and are fully supported on SQL Server 2012 Standard Edition x64 instances. These features work as they should. I have contacted the documentation teams internally to make sure that this is corrected in the next release of said Web article.

    Read the article

  • SSC Clinic: Can Implementing "Optimize for Ad Hoc Queries" Boost Performance for the SQLServerCentral.com and Simple-Talk.Com SQL Servers?

    With the introduction of the instance-level option “optimize for ad hoc workloads” in SQL Server 2008, DBAs have a tool to deal with a problem known as plan cache pollution, or plan cache bloat. It’s often caused when one-time use ad hoc queries are sent to SQL Server from Object-Relational Mapping (ORM) solutions, such as LINQ, NHibernate, or Entity Framework. The problem can prevent SQL Server from using its available memory optimally, potentially hurting performance.
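
    For context, the option is enabled at the instance level with sp_configure; since it is an advanced option, it has to be exposed first (a minimal sketch):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;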

    Read the article

  • Software solution from the 2000's, should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. This system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and the system is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system:

    - It was developed around the year 2000
    - Fairly small system: 2-5 users, 6 forms, ~8 tables with average quantities of data
    - Built on early Visual Basic; forms created with the drag-and-drop designer. The interface is basically just a window with a menu and some forms
    - Uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query. Data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper
    - Users work in a Microsoft XP environment (and up)

    Their main problem is that they can no longer adjust and calculate prices, add new carton types, etc., correctly, because they can't (or rather, they don't know how to) touch the data on the server. I suggested 3 possible solutions:

    1. Attempt to patch the current system
    2. Create a fresh new interface (preferably a similar environment, VB.NET or VB based)
    3. Bring it back to an Excel solution, considering it is such a small system

    There might be more options, but these are the ones I could think of. My questions are: What should I recommend and why? What are (or could be) the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • An algorithm for finding subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. It relates to finding a subset. In reality it is more complicated, but I tried to present it here using some simpler concepts. To make things easier, I created this conceptual DB model: assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard, and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check that the required amount is no more than the available amount for each ingredient, and then decide whether we can cook the dinner - if the required amount of at least one ingredient exceeds the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should work when the problem is stated the other way round, i.e. how to find recipes which can be cooked using only the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
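
    To show where my thoughts are heading, my first attempt at the reversed query looks something like this (table and column names are made up from the conceptual model above):

        -- Recipes with no ingredient whose required amount exceeds what is available
        SELECT r.RecipeId, r.Name
        FROM Recipe AS r
        WHERE NOT EXISTS
        (
            SELECT 1
            FROM RecipeIngredient AS ri
            LEFT JOIN CupboardIngredient AS c
                   ON c.IngredientId = ri.IngredientId
            WHERE ri.RecipeId = r.RecipeId
              AND ri.AmountRequired > COALESCE(c.AmountAvailable, 0)
        );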

    Read the article

  • Stairway to T-SQL DML Level 8: Using the ROLLUP, CUBE and GROUPING SET operator in a GROUP BY Clause

    In this article I will be expanding on my discussion of the GROUP BY clause by exploring the ROLLUP, CUBE and GROUPING SETS operators. These additional GROUP BY operators make it easy to have SQL Server create subtotals, grand totals, a superset of subtotals, as well as multiple aggregate groupings, in a single SELECT statement.
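
    As a taste of what the article covers, a ROLLUP over two columns returns the detail rows, a subtotal per value of the first column, and a grand total, all in one statement (the table here is illustrative):

        SELECT Region, Product, SUM(SalesAmount) AS TotalSales
        FROM dbo.Sales
        GROUP BY ROLLUP (Region, Product);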

    Read the article

  • How to receive alerts when you centralize your SQL Server Event Logs.

    Learn how you can get alerts when you centralize the Event log. This is part 2 of the previous article "How to centralize your SQL Server Event Logs."

    Read the article

  • SQL Server 2008 System Functions to Monitor the Instance, Database, Files, etc.

    SQL Server provides several system metadata functions which allow users to obtain property values of different SQL Server objects and securables. Although you can also use the SQL Server catalog views or Dynamic Management Views to obtain much of this information, in some circumstances the system metadata functions simplify the process. In this tip I am going to demonstrate some of the available system metadata functions and their usage in different scenarios.
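
    As a quick flavor of what these functions look like side by side (the object name in the last line is just an example):

        SELECT SERVERPROPERTY('Edition')                 AS InstanceEdition,
               SERVERPROPERTY('ProductVersion')          AS ProductVersion,
               DATABASEPROPERTYEX('master', 'Recovery')  AS RecoveryModel,
               OBJECTPROPERTY(OBJECT_ID('dbo.Customer'), 'IsUserTable') AS IsUserTable;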

    Read the article

  • How do I manipulate Handler Mappings cleanly in IIS7 using the Microsoft.Web.Administration namespace?

    - by Kev
    I asked this over on Stack Overflow, but maybe it's something an experienced IIS 7 administrator might know more about, so I'm asking here as well. When manipulating Handler Mappings using the Microsoft.Web.Administration namespace, is there a way to remove the <remove name="handler name"> tag added at the site level? For example, I have a site which inherits all the handler mappings from the global handler mappings configuration. In applicationHost.config the <location> tag initially looks like this:

        <location path="60030 - testsite-60030.com">
            <system.webServer>
                <security>
                    <authentication>
                        <anonymousAuthentication userName="" />
                    </authentication>
                </security>
            </system.webServer>
        </location>

    To remove a handler I use code similar to this:

        string siteName = "60030 - testsite-60030.com";
        string handlerToRemove = "ASPClassic";

        using (ServerManager serverManager = new ServerManager())
        {
            Configuration siteConfig = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection handlersSection =
                siteConfig.GetSection("system.webServer/handlers", siteName);
            ConfigurationElementCollection handlersCollection = handlersSection.GetCollection();
            ConfigurationElement handlerElement = handlersCollection
                .Where(h => h["name"].Equals(handlerToRemove)).Single();
            handlersCollection.Remove(handlerElement);
            // Persist the change to applicationHost.config (the /commit:apphost equivalent)
            serverManager.CommitChanges();
        }

    The equivalent APPCMD instruction would be:

        appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /-[name='ASPClassic'] /commit:apphost

    This results in the site's <location> tag looking like:

        <location path="60030 - testsite-60030.com">
            <system.webServer>
                <security>
                    <authentication>
                        <anonymousAuthentication userName="" />
                    </authentication>
                </security>
                <handlers>
                    <remove name="ASPClassic" />
                </handlers>
            </system.webServer>
        </location>

    So far so good. However, if I re-add the ASPClassic handler this results in:

        <location path="60030 - testsite-60030.com">
            <system.webServer>
                <security>
                    <authentication>
                        <anonymousAuthentication userName="" />
                    </authentication>
                </security>
                <handlers>
                    <!-- Why doesn't <remove> get removed instead of tacking on an <add> directive? -->
                    <remove name="ASPClassic" />
                    <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" modules="IsapiModule"
                         scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" />
                </handlers>
            </system.webServer>
        </location>

    This happens both when using the Microsoft.Web.Administration namespace from C# and when using the following APPCMD command:

        appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /+[name='ASPClassic',path='*.asp',verb='GET,HEAD,POST',modules='IsapiModule',scriptProcessor='%windir%\system32\inetsrv\asp.dll',resourceType='File'] /commit:apphost

    This can result in a lot of cruft over time for each website that's had a handler removed and then re-added programmatically. Is there a way to just remove the <remove name="ASPClassic" /> tag using Microsoft.Web.Administration code or APPCMD?

    Read the article

< Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >