Search Results

Search found 79317 results on 3173 pages for 'sql error messages'.

Page 499/3173 | < Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >

  • T-SQL Tuesday: Personality Clashes, Style Collisions, and Differences of Opinion

    - by andyleonard
    This post is the twenty-sixth part of a ramble-rant about the software business. The current posts in this series are: Goodwill, Negative and Positive Visions, Quests, Missions Right, Wrong, and Style Follow Me Balance, Part 1 Balance, Part 2 Definition of a Great Team The 15-Minute Meeting Metaproblems: Drama The Right Question Software is Organic, Part 1 Metaproblem: Terror I Don't Work On My Car A Turning Point Human Doings Everything Changes Getting It Right The First Time One-Time Boosts Institutionalized!...(read more)

    Read the article

  • How can I stop an auto-generated Linq to SQL class from loading ALL data?

    - by Gary McGill
    I have an ASP.NET MVC project, much like the NerdDinner tutorial example. (I'm using MVC 2, but followed the NerdDinner tutorial in order to create it). As per the instructions in part 3 of the tutorial, I've created a Linq-to-SQL model of my database by creating a "Linq to SQL Classes" (.dbml) surface, and dropping my database tables onto it. The designer has automatically added relationships between the generated classes based on my database tables. Let's say that my classes are as per the NerdDinner example, so I have Dinner and RSVP tables, where each Dinner record is associated with many RSVP records - hence in the generated classes, the Dinner object has a RSVPs property which is a list of RSVP objects. My problem is this: it appears (and I'd be gladly proved wrong on this) that as soon as I access a Dinner object, it's loading all of the corresponding RSVP objects, even if I don't use the RSVPs member. First question: is this really the default behavior for the generated classes? In my particular situation, the object graph contains many more tables (which have an order of magnitude more records), and so this is disastrous behaviour - I'd be loading tons of data when all I want to do is show the details of a single parent record. Second question: are there any properties exposed through the designer UI that would let me modify this behavior? (I can't find any). Third question: I've seen a description of how to control the loading of related records in a DataContext by using a DataShape object associated with the DataContext. Is that what I'm meant to do, and if so are there any tutorials like the NerdDinner one that would show not only how to do it, but also suggest a 'pattern' for normal use?
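    For what it's worth on the first question: LINQ to SQL's generated classes defer-load child collections by default - the RSVP query fires when the RSVPs property is first touched, not when the Dinner loads - so it is worth checking for anything (serialization, a debugger visualizer, a grid binding) that silently touches the property. Either way, loading behavior is controlled per DataContext; here is a minimal sketch (DataShape was the pre-release name of what shipped as DataLoadOptions; the context class name follows the NerdDinner convention):

        using System.Data.Linq;

        var db = new NerdDinnerDataContext();

        // Turn deferred loading off: child properties stay null unless preloaded.
        db.DeferredLoadingEnabled = false;

        // Eager-load only the associations you actually need, in a single query.
        var options = new DataLoadOptions();
        options.LoadWith<Dinner>(d => d.RSVPs);
        db.LoadOptions = options;   // must be assigned before the first query runs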

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • MultiCast Messages to multiple clients on the same machine

    - by Christopher Chase
    I'm trying to write a server/service that broadcasts a message on the LAN every second or so, kind of like service discovery. The message needs to be received by multiple client programs that could be on the same machine or on different machines, and there could be more than one program running on each machine at the same time. I'm using Delphi 7 with Indy 9.0.18. Where I'm stuck is whether I should be using UDP or IP multicast, or whether it's even possible. I've managed to get it to work with IP multicast with one client per machine, but even after many tries with different bindings, max/min ports, etc., I can't seem to find a solution.
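    For several receivers to share one multicast port on the same machine, each socket must set the reuse-address option before binding, and each must join the group. A minimal sketch of the receiving side, shown in C# for brevity (the group address 239.255.0.1 and port 5000 are placeholders; in Indy, look for the equivalent reuse and binding settings your version's multicast client exposes):

        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class MulticastReceiver
        {
            static void Main()
            {
                var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
                // Allow other receivers on this machine to bind the same port.
                socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
                socket.Bind(new IPEndPoint(IPAddress.Any, 5000));
                // Join the multicast group so the OS delivers the broadcasts to this socket too.
                socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership,
                    new MulticastOption(IPAddress.Parse("239.255.0.1"), IPAddress.Any));
                var buffer = new byte[1024];
                while (true)
                {
                    int read = socket.Receive(buffer);
                    System.Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
                }
            }
        }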

    Read the article

  • How should I evaluate the Database Solution for Large Data Application

    - by GµårÐïåñ
    Background: I have been tasked to write an application that will be a combination of document and inventory management in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to make them searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). Help, please: I need help understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives). And this is where I need help in understanding how to evaluate those databases. The goal is that the data is all in one place (a single file) that will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. How should I evaluate my database options for this scenario?

    Read the article

  • Manchester SQL Server User Group has a new venue

    - by Testas
    Hi all, I am pleased to confirm the Manchester user group has a new venue, in partnership with BSS: BSS, Westminster House, Minshull Street, off Portland Street, Manchester, M1 3HU. Dates have been updated for the UG sessions, so please take a look. Any questions, please email me. Chris

    Read the article

  • PASS Summit 2013 - A Bunch of Blog Posts Recently

    - by RickHeiges
    Recently, there have been a number of blog posts about having the 2013 PASS Summit in Seattle or elsewhere. I had a post in November about the process and some of the major factors that were on my mind; you can read it here. There is value in moving the Summit to another venue. There is value in having the Summit in the same location/venue year after year as well. Many of the posts that I read recently make excellent arguments for each. As time goes on and you hear another good argument for one...(read more)

    Read the article

  • PASS Board of Directors Election - Making Progress

    - by RickHeiges
    It is almost time to cast your vote in this year's PASS BoD elections. Things have changed considerably since the first PASS BoD election that I participated in. That was in 2001. I hadn't even been to a Summit or even a chapter meeting yet. I had registered for the PASS Summit 2001 (which was postponed to Jan 2002, btw). Back then, the elections were held at the Summit and on paper, but there was no Summit that year. If you wanted to vote, you needed to print out a ballot and fax it in. I think that...(read more)

    Read the article

  • Git: hide commit messages on remote repo

    - by Sebastian Bechtel
    Hi, I don't know how to state my problem concisely, so I'll try to explain it a bit. When working with git on my local machine I usually commit a lot. For this I use topic branches. Then I merge such a topic branch into a branch called develop, which gets pushed to a remote repo. I always merge with --no-ff, so there is always a single merge commit for the whole topic. Now I'd like to push only that commit, with a description of what I did in the branch as a whole. I would prefer this because then you can look at the commit history on the server and see directly what happened, without reading every single commit, while for my local work I would still have the full history if I want to reset my branch or something similar. I don't know if there is a way to do this in git, but it would be very useful for me, so I'm giving it a shot and asking you. Best regards, Sebastian
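    Git won't push a different history than the branch actually has, but the usual way to get this effect is a squash merge: develop receives one summary commit per topic, while the detailed history stays on the local topic branch. A sketch, with branch names assumed:

        # on the integration branch that gets pushed
        git checkout develop
        git merge --squash my-topic      # stages the topic's combined changes, no commit yet
        git commit -m "One summary of what the topic did as a whole"
        git push origin develop          # the remote sees only the summary commit

    The trade-off versus --no-ff is that develop no longer references the topic commits at all, so keep the local topic branch around if you still want that detail.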

    Read the article

  • Update fails with unrecoverable dpkg fatal error

    - by Jonthue Michel
    Hello, I keep receiving this error every time I try to install updates. I tried sudo dpkg --configure, sudo apt-get update, and sudo apt-get install -f, but they all failed on me.

    installArchives() failed:
    (Reading database ... 55%
    dpkg: unrecoverable fatal error, aborting:
     failed to read on buffer copy for files list for package `libc6-i386': Is a directory
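    "failed to read ... files list ... Is a directory" usually means the package's files list under /var/lib/dpkg/info has been replaced by a directory. A hedged repair sketch - hand-editing the dpkg database is a last resort, so check before moving anything:

        # inspect the files-list entry dpkg is choking on
        ls -ld /var/lib/dpkg/info/libc6-i386.list
        # if it really is a directory, move it aside and reinstall the package
        sudo mv /var/lib/dpkg/info/libc6-i386.list /var/lib/dpkg/info/libc6-i386.list.bad
        sudo apt-get install --reinstall libc6-i386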

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion of general ETL best practices, the subject of checkpoint files as a means for package restartability came up, and I stated that I was dead against using them. For anyone that may care, here is why:
    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - they don't make any allowance for loop iterations
    - they cannot store variables of type Object
    - they are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - they are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - they don't work properly
    I'll expand on the last bullet point. I have encountered situations where the behaviour of tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file, and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

  • kill -9 + disable messages (standart output) from kill command

    - by yael
    Hi all, I wrote the following script. It sets up a 20-second timeout in case grep does not find the relevant string in the files. The script works well, but its output includes a line like ./test: line 11: 30039: Killed. How can I suppress this message from the kill command? And how can I tell kill to ignore the case where the process no longer exists? Thanks, Yael

    #!/bin/ksh
    ( sleep 20 ; [[ ! -z `ps -ef | grep "qsRw -m1" | awk '{print $2}'` ]] && kill -9 `ps -ef | grep "qsRw -m1" | awk '{print $2}'` 2>/dev/null ; sleep 1 ) &
    RESULT=$!
    print "the proccess:"$RESULT
    grep -qsRw -m1 "monitohhhhhhhr" /var
    if [[ $? -ne 0 ]]
    then
      print "kill "$RESULT
      kill -9 $RESULT
    fi
    print "ENDED"

    Output:

    ./test
    the proccess:30038
    ./test: line 11: 30039: Killed
    kill 3003
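    As for the question itself: the "Killed" line is printed not by kill but by the shell's job control when it reaps the dead background job, so redirecting kill's stderr cannot silence it. A common trick (a sketch, assuming ksh/bash job-control semantics) is to wait on the PID after killing it and silence the wait; the 2>/dev/null on kill also covers the already-exited case:

        kill -9 $RESULT 2>/dev/null   # no complaint if the process is already gone
        wait $RESULT 2>/dev/null      # reap the job quietly; the shell prints no "Killed" notice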

    Read the article

  • Pythonika installation error on ubuntu 12

    - by user1426913
    I have been following this guide to install Pythonika on Ubuntu: How to install Pythonika on Ubuntu? I get this error:

    $ sudo make -f Makefile.linux
    cc -c Pythonika.c -I/usr/local/Wolfram/Mathematica/9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux/CompilerAdditions -I/usr/include/python2.7/
    Pythonika.c: In function ‘PyUnicodeString’:
    Pythonika.c:109:5: warning: passing argument 1 of ‘PyUnicodeUCS4_FromUnicode’ from incompatible pointer type [enabled by default]
    /usr/include/python2.7/unicodeobject.h:464:23: note: expected ‘const Py_UNICODE *’ but argument is of type ‘short unsigned int *’
    Pythonika.c: In function ‘python_to_mathematica_object’:
    Pythonika.c:411:13: warning: passing argument 2 of ‘MLPutUnicodeString’ from incompatible pointer type [enabled by default]
    /usr/local/Wolfram/Mathematica/9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux/CompilerAdditions/mathlink.h:4299:1: note: expected ‘const short unsigned int *’ but argument is of type ‘Py_UNICODE *’
    "/usr/local/Wolfram/Mathematica/9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux/CompilerAdditions/mprep" Pythonika.tm -o Pythonikatm.c
    /bin/sh: 1: /usr/local/Wolfram/Mathematica/9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux/CompilerAdditions/mprep: not found
    make: *** [Pythonikatm.o] Error 127
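    The warnings are harmless here; the build dies because mprep is not at the path the Makefile expects. A hedged check, assuming a 64-bit install where Mathematica may ship the MathLink kit under a Linux-x86-64 directory rather than Linux:

        # find where this installation actually keeps mprep
        find /usr/local/Wolfram/Mathematica/9.0/SystemFiles/Links/MathLink -name mprep
        # if it is under .../DeveloperKit/Linux-x86-64/CompilerAdditions, update the
        # corresponding directory path in Makefile.linux to point there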

    Read the article

  • apportcheckresume recurring error and gnome shell fixations

    - by feedyourhead
    Since installing Ubuntu 12.10 Gnome remix I have encountered unpredictable and unwanted system behavior. Almost after each resume from suspend (or even after unlocking the screen after it goes blank) I get an apportcheckresume error and the message "Ubuntu 12.10 has encountered an internal error". Many times the system won't even resume and I need to restart it. Other times the login screen is not visible; the screen stays blank and I have to type my password blind. Sometimes an additional thing also happens: textures get messed up, and the background and windows get distorted by horizontal lines. Sorry, I can't locate the log file for the errors. My system specification: Ubuntu 12.10, 3.5.0-19-generic, Gnome 3.6, ThinkPad T400, Mobile Intel® GM45 Express Chipset graphics, Intel® Core™2 Duo CPU P8600
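    On the log-file question: apport keeps crash reports on disk even after the dialog is dismissed, so these are reasonable places to look (standard Ubuntu paths):

        ls -l /var/crash/        # apport crash reports, one .crash file per failure
        less /var/log/kern.log   # kernel messages around the suspend/resume cycle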

    Read the article

  • Security Goes Underground

    - by BuckWoody
    You might not have heard of as many data breaches recently as in the past. As you’re probably aware, I call them out here as often as I can, especially the big ones in government and medical institutions, because I believe those can have lasting implications on a person’s life. I think that my data is personal – and I’ve seen the impact of someone having their identity stolen. It’s a brutal experience that I wouldn’t wish on anyone. So with all of that, it stands to reason that I hold data professionals to the highest standards on security. I think your first role is to protect the data you have – number one because losing it can be so harmful, and number two because it isn’t yours. It belongs to the person that data describes. You might think I’m happy about that downturn in reported data losses. Well, I was, until I learned that companies have realized they suffer a lowering of their stock when they report a breach, but not when they don’t. So, since we all do what we are measured on, they don’t. So now, not only are they not protecting your information, they are hiding the fact that they are losing it. So take this as a personal challenge. Make sure you have a security audit on your data, and treat any breach like a personal failure. We’re the gatekeepers, so let’s keep the gates.

    Read the article

  • Speaking at Atlanta.MDF on March 12

    - by RickHeiges
    I am fortunate enough to be speaking to a user group with a really cool name - Atlanta.MDF (Microsoft Database Forum). Although I visit Atlanta often, it usually involves running from one concourse to another, and rarely do I get the chance to visit the user group. I have made it to the user group on several occasions in the past, but it has been several years. This will be my first presentation to the group. I will be speaking about Database Consolidation - something I have been doing for years....(read more)

    Read the article

  • PASS By-Law Changes

    - by RickHeiges
    Over the past year, the PASS Board of Directors (BoD) has been looking at changing the by-laws. We've had in-depth in-person discussions about how the by-laws could/should be changed. Here is the link to the documents that I am referring to: http://www.sqlpass.org/Community/PASSBlog/entryid/300/Amendments-to-PASS-Bylaws.aspx One of the changes that I believe addresses more perception than reality is the rule of "No more than two from a single organization". While I personally do not believe that...(read more)

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store meta data for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want to be able to have our meta data public facing, while keeping collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the meta data with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the meta data into a table that SQL can join on, and return the combined data and meta data that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
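    If the SQL/CLR route proves awkward, the match-up can also live in the service layer as an in-memory LINQ join over the two result sets; a rough sketch where every type and method name is hypothetical (MetaField coming from the public-facing metadata service, Answer from the secured data store):

        // Hypothetical shapes; both sequences share a field identifier.
        IEnumerable<MetaField> meta = metaService.GetFields(formId);   // from the web service
        IEnumerable<Answer> answers = dataStore.GetAnswers(formId);    // from the secure DB

        var combined = from m in meta
                       join a in answers on m.FieldId equals a.FieldId
                       select new { m.Question, m.Layout, a.Value };

    The join happens in memory rather than in the database, so it only pays off while the per-form result sets stay modest.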

    Read the article

  • Backup File Naming Convention

    - by Andrew Kelly
    I have been asked this many times before, and again just recently, so I figured why not blog about it. None of the information outlined here is rocket science or even new, but it is an area that I don’t think people put enough thought into before implementing. Sure, everyone chooses some format, but it often doesn’t go far enough, in my opinion, to get the most bang for the buck. This is the format I prefer to use: ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx ServerName_InstanceName...(read more)
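    As a worked example of that convention, here is a hedged T-SQL sketch that assembles such a name at backup time (SQL Server 2008+ syntax; the path, database name, and timestamp precision are illustrative only):

        -- yyyymmdd_hhmiss from style 120 ('yyyy-mm-dd hh:mi:ss')
        DECLARE @stamp nvarchar(20) =
            REPLACE(REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), N'-', N''), N':', N''), N' ', N'_');
        DECLARE @file nvarchar(500) =
            N'X:\Backups\'
            + CAST(SERVERPROPERTY('MachineName') AS nvarchar(128))
            + N'_' + COALESCE(CAST(SERVERPROPERTY('InstanceName') AS nvarchar(128)), N'MSSQLSERVER')
            + N'_FULL_AdventureWorks_' + @stamp + N'.bak';

        BACKUP DATABASE AdventureWorks TO DISK = @file;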

    Read the article

  • ASP.NET MVC 2: How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes. Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated. Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option. (from p in db.Products group p by p.ProductSize into g select new Category { PropertyType = g.Key, Count = g.Count() }).Distinct(); Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.
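    One way to do this without a string-parsing query library is to build the grouping key from the quoted field name with expression trees; a minimal sketch, assuming the chosen property is a string (as ProductSize appears to be):

        using System.Linq.Expressions;

        // p => p.<propertyName>, built at runtime from the quoted field name
        static Expression<Func<Product, string>> KeyFor(string propertyName)
        {
            var p = Expression.Parameter(typeof(Product), "p");
            var body = Expression.Property(p, propertyName);
            return Expression.Lambda<Func<Product, string>>(body, p);
        }

        // usage: same shape as the original query, field name now a string
        var categories = db.Products
            .GroupBy(KeyFor("ProductSize"))
            .Select(g => new Category { PropertyType = g.Key, Count = g.Count() })
            .Distinct();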

    Read the article

  • Adding Actions to a Cube in SQL Server Analysis Services 2008

    Actions are a powerful way of extending the value of SSAS cubes for the end user, who can click on a cube or a portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item. Actions haven't been well-documented until now; Robert Sheldon once more makes everything clear.

    Read the article

  • Click No Browse: How to Navigate Objects Without Opening Them

    - by thatjeffsmith
    Oracle SQL Developer by default automatically opens the object editor when you click on an object in your connection tree or schema browser. For most folks this is very convenient. But if you are selecting objects to drag them to a model or to the worksheet, this can get annoying, as the focus of the screen changes when you don’t want it to. The other scenario where this feature might disrupt more than delight is when you want to click around the database in the tree, and every time you click on an object, the object editor automatically changes to the selected object. You can disable this automatic browsing behavior in SQL Developer by modifying this preference: Tools > Preferences > Database > ObjectViewer > Open Object on Single Click - disable this if you don’t want an object to open when you click on it. OK, I do realize my description of the problem may have confused the heck out of you just now. So instead of more words, how about a couple of animations of the object-click behavior with the option ON and OFF? Preference disabled: click, no open; double-click, open. Preference enabled (default): as you click on objects, they are automatically opened.

    Read the article

  • SQL MDS - Updating the Name attribute of member using Staging Table

    - by Randy Aldrich Paulo
    Creating a member is usually done by populating the member staging table (tblStgMember); during this process you assign values for the member code and member name. If you then want to update the member's Name attribute, you can do so by adding a record to the attribute staging table (tblStgMemberAttribute) with AttributeName = "Name". (If you instead try repopulating tblStgMember, it will report that the member code already exists.)

    INSERT INTO mdm.tblStgMemberAttribute
      (ModelName, EntityName, MemberType_ID, MemberCode, AttributeName, AttributeValue)
    VALUES
      (N'Product', N'Product', 1, N'BK-M101', N'Name', N'Updated Member Name Description')

    Read the article

  • JUJU and ERROR environment has no access-key or secret-key

    - by Riccardo Magrini
    Following the official guide (https://juju.ubuntu.com/docs/config-maas.html), and given that I've generated the SSH key (added to the MAAS UI) and the API key, my environments.yaml file looks like this:

    environments:
      maas:
        type: maas
        maas-server: 'http://x.x.x.x/MAAS/'
        maas-oauth: 'NDPA86PsEzS7bFynSy:vqJLkyHUJbvYzbtY5Q:sXXXXXXXXXXXXXXXXXXXXXX
        admin-secret: 'nothing'
        default-series: precise
        authorized-keys-path: ~/.ssh/id_rsa.pub # or any file you want.

    When I try to run the command juju bootstrap, I receive the following error: ERROR environment has no access-key or secret-key. Can someone explain where I went wrong? MAAS and juju are installed from their stable PPAs on Ubuntu 12.04.3 Server.
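    One thing worth ruling out, assuming juju-core 1.x behavior: "no access-key or secret-key" is what juju reports for an ec2-type environment missing its credentials, so it often means juju is reading a different environment than intended (a leftover amazon block, or no default set). Selecting the maas environment explicitly rules that out:

        # bootstrap against the maas environment by name
        juju bootstrap -e maas
        # or set it as the default at the top of ~/.juju/environments.yaml:
        #   default: maas

    Also note the maas-oauth line above is missing its closing quote as pasted; worth checking in the real file.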

    Read the article
