Search Results

Search found 62606 results on 2505 pages for 'sql files'.

Page 518/2505 | < Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >

  • Basic Defensive Database Programming Techniques

    We can all recognise good-quality database code: It doesn't break with every change in the server's configuration, or on upgrade. It isn't affected by concurrent usage, or high workload. In an extract from his forthcoming book, Alex explains just how to go about producing resilient TSQL code that works, and carries on working.

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is, how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

        foreach (ModelMesh m in model.Meshes)
        {
            // Slightly change the way the world matrix is calculated when using the bunny object,
            // since it is not quite centered in object space
            if (isDrawBunny)
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation
                        * Matrix.CreateTranslation(position + bunnyPositionTransform);
            }
            else // If not rendering the bunny, draw normally
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation
                        * Matrix.CreateTranslation(position);
            }

            foreach (Effect e in m.Effects)
            {
                Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
                e.Parameters["ViewProjection"].SetValue(ViewProjection);
                e.Parameters["World"].SetValue(world);
                e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
                e.Parameters["CameraPosition"].SetValue(camera.Position);
                e.Parameters["LightColor"].SetValue(lightColor);
                e.Parameters["MaterialColor"].SetValue(materialColor);
                e.Parameters["shininess"].SetValue(shininess);
            }

            m.Draw();
        }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is: is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
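
    One way to handle per-shader parameters, sketched here under the assumption that XNA's EffectParameterCollection indexer returns null for a name the compiled shader does not declare (worth verifying on your XNA version), is to probe each Effect for the parameter before setting it; TrySetParameter is a hypothetical helper name:

        // using Microsoft.Xna.Framework.Graphics;
        static void TrySetParameter(Effect effect, string name, float value)
        {
            // Only set the value if this particular shader actually exposes the parameter.
            EffectParameter p = effect.Parameters[name];
            if (p != null)
            {
                p.SetValue(value);
            }
        }

        // Inside the inner loop: shaders without a 'shininess' parameter are simply skipped.
        // TrySetParameter(e, "shininess", shininess);

    This keeps one generic draw loop while letting each Effect consume only the parameters it understands.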

    Read the article

  • SQL SERVER Difference Between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE

    Today, we are going to discuss something very simple, but quite commonly confused: two options of ALTER DATABASE. The first one is ALTER DATABASE …ROLLBACK IMMEDIATE and the second one is WITH NO_WAIT. Many people think they are the same or are not sure of the difference between these two options. Before we continue our explanation, [...]
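
    As a minimal sketch of the two options (the database name below is a placeholder): ROLLBACK IMMEDIATE rolls back any open transactions and applies the change right away, while WITH NO_WAIT makes the ALTER fail immediately if it cannot get the locks it needs, instead of waiting:

        -- Disconnect other sessions now, rolling back their open transactions
        ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        -- Fail at once, rather than wait, if other sessions are still holding locks
        ALTER DATABASE SampleDB SET SINGLE_USER WITH NO_WAIT;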

    Read the article

  • Cannot connect to windows server by name over vpn connection

    - by ErocM
    I have a rented dedicated Windows server on a public IP that is acting as a SQL Server and VPN server. I need to connect to this server by computer name to get replication in place, and I cannot use an IP address due to a separate issue, so we are going the VPN route. That is my primary issue: after I am connected to this server's VPN, I can connect to SQL Server using the IP address, but I cannot connect by the computer's name. Right now there is no hardware firewall on it, since I had it removed to test this issue. I am running Windows 2008 Enterprise Server as the VPN server. I am not sure if the route print from the workstation trying to connect will help any, but here is the info:

        IPv4 Route Table - Active Routes:
        Network Destination    Netmask            Gateway    Interface    Metric
        10.0.0.0               255.0.0.0          10.0.0.1   10.0.0.2     21
        10.0.0.2               255.255.255.255    On-link    10.0.0.2     276

    Any other info needed? Thanks for the help!

    CLARIFICATION #1: [screenshots showing the server's info, the workstation that is trying to connect, and the established VPN connection]. I connect to the server via "Control Panel\Network and Internet\Network and Sharing Center\Connect or Disconnect".

    CLARIFICATION #2: I've tried to connect directly to the SQL Server as I did above, but with the computer's name, and I couldn't get to it. [Screenshot: running net view against the server name from the workstation; it couldn't find it.]

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of a self-referential many-to-many as described in the DataMapper associations docs. Both cases consist of an Item class, which may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items, and benchmarked the require/unrequire functions as below:

        Benchmark.bmbm do |x|
          x.report("require:")   { item_1.require_item item_2, 10 }
          x.report("unrequire:") { item_1.unrequire_item item_2 }
        end

    To be clear, both functions are DataMapper add/modify calls like:

        componentMaps.create :component_id => item.id, :quantity => quantity
        componentMaps.all(:component_id => item.id).destroy!

    and

        links_to_components.create :component_id => item.id, :quantity => quantity
        links_to_components.all(:component_id => item.id).destroy!

    The results are variable and in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case with:

        (1..10000).each do |i|
          Item.create :name => "item_#{i}"
        end

        Benchmark.bmbm do |x|
          x.report("Get")   { item = Item.get 9712 }
          x.report("First") { item = Item.first :name => "item_9712" }
        end

    where the results were very different, like 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant?
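
    One way to make the comparison more trustworthy (a sketch, reusing the item_1/item_2 setup above) is to repeat each operation many times per report, so the measured interval sits well above timer resolution and per-call noise:

        require 'benchmark'

        Benchmark.bmbm do |x|
          # Pairing require with unrequire keeps the association table from growing between iterations.
          x.report("require/unrequire x1000:") do
            1000.times do
              item_1.require_item(item_2, 10)
              item_1.unrequire_item(item_2)
            end
          end
        end

    Repeating the run a few times and comparing the spread gives a better sense of whether the 0.018 versus 0.022 second gap is a real difference between the two association styles or just run-to-run variation.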

    Read the article

  • SQL SERVER MAXDOP Settings to Limit Query to Run on Specific CPU

    This is a very simple and well-known tip. The query hint MAXDOP (Maximum Degree Of Parallelism) can be set to restrict a query to run on a certain number of CPUs. Please note that this hint cannot restrict or dictate which CPU is to be used; rather, it restricts the number of CPUs used in a single [...]
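
    A minimal sketch of the hint against a hypothetical table, limiting this one statement to a single CPU regardless of the server-wide setting:

        -- Run this query serially (at most one scheduler), without changing sp_configure settings
        SELECT OrderID, SUM(Quantity) AS TotalQty
        FROM dbo.OrderDetails
        GROUP BY OrderID
        OPTION (MAXDOP 1);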

    Read the article

  • structure problem in Relational DBMS creation

    - by Kane
    For learning and understanding purposes, I currently want to try to make a small relational DBMS with simple features like (for now) only sequential reading/writing and CREATE TABLE, INSERT, SELECT, UPDATE and DELETE management. I am currently on the "think" part of the project and I am stuck on how to store the data I read in memory. First I was thinking of putting it properly into a structure, but the problem is that tables are all different; knowing the type of each column is not an issue, but I am not sure C provides a way to make a fully dynamic structure. My second and current idea is to make a simple char array of the required length and just get the data in order with casts. But I am not sure this is a good way to do that part, so I wanted to ask for your opinion and advice about it. Thanks in advance for your help. NB: I hope my question is clear enough and understandable; I still lack practice in English.
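
    A minimal sketch of the char-array idea, with hypothetical table and column names: a small schema records each column's type and byte offset, and values are copied into and out of a flat row buffer with memcpy, so no compile-time struct per table is needed:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        typedef enum { COL_INT32, COL_CHAR16 } ColType;

        typedef struct {
            const char *name;
            ColType     type;
            size_t      offset;   /* byte offset of the column inside a row buffer */
        } ColumnDef;

        /* Hypothetical table: id INT, name CHAR(16) -> fixed row size of 4 + 16 bytes */
        static const ColumnDef cols[] = {
            { "id",   COL_INT32,  0 },
            { "name", COL_CHAR16, 4 },
        };

        int main(void) {
            unsigned char row[20] = { 0 };

            int32_t id = 42;
            memcpy(row + cols[0].offset, &id, sizeof id);             /* write column "id"   */
            memcpy(row + cols[1].offset, "widget", sizeof "widget");  /* write column "name" */

            int32_t read_id;
            memcpy(&read_id, row + cols[0].offset, sizeof read_id);   /* read column "id" back */
            printf("id=%d name=%s\n", read_id, (const char *)(row + cols[1].offset));
            return 0;
        }

    The same idea extends to variable-length columns by storing a small length or offset table at the start of each row, which is roughly what real storage engines do with their slotted pages.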

    Read the article

  • Lossless cutting of MPEG TS files in Windows

    - by Sebastian P.R. Gingter
    I have several HD video files in transport stream (.ts) format, recorded with my satellite receiver. I want to cut them, as in simply remove a few minutes from the beginning, the end and sometimes a few minutes in the middle (remove early starts of recordings, late ends and, for the occasional file, the ads). What is a good, ideally but not necessarily free, piece of software with a GUI to do this? Best would be something where you could select points on a timeline and simply cut the elements out. As a resulting file, just the same .ts format would be great, but I could also live with putting the video contents into another container, as long as the video is NOT re-encoded / transcoded. The files have additional audio streams and subtitles; these should be retained in the process. My OS is Windows.

    Read the article

  • Possibility of recovering files from a dd zero-filled hard disk

    - by unknownthreat
    I have "zero filled" (complete wiped) an external hard disk using dd, and from what I have heard: people said you should at least "zero fill" 3 times to be sure that the data are really wiped and no one can recover anything. So I decided to scan the disk once again after I've zero filled the disk. I was expecting the disk to still have some random binary left. It turned out that it has only a few sequential bytes in the very beginning. This is probably the file structure type and other headers stuff. Other than that, it's all zeros and nothing else. So if we have to recover any file from a zero filled disk, ...how? From what I've heard, even you zero fill the disk, you should still have some data left. ...or could dd really completely annihilate all data?

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit, an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
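
    To illustrate the kind of mismatch described above (a sketch with hypothetical table and column names, not the actual schema from the article): joining a numeric key to a varchar(50) key forces an implicit conversion on every row, which shows up as a warning in the execution plan and defeats efficient index use; aligning the types removes it:

        -- Implicit conversion: OrderRef is varchar(50), o.OrderID is numeric
        SELECT o.OrderID, d.Amount
        FROM dbo.Orders o
        JOIN dbo.OrderDetails d
          ON d.OrderRef = o.OrderID;   -- plan shows a CONVERT_IMPLICIT on the join predicate

        -- Schema fix: give the join columns the same data type
        ALTER TABLE dbo.OrderDetails ALTER COLUMN OrderRef numeric(10, 0) NOT NULL;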

    Read the article

  • Complete stack traces from Hyperic

    - by Mike Kushner
    I've set up Hyperic to run on our CI machine, and every once in a while it reacts to some random stack trace and sends off an alert. So far so good; we've caught a lot of intermittent bugs that way. My only issue is that the alert only contains the first error line and not the entire stack trace, which requires me to access the machine and look at the logs manually. Is there any way to modify the alert message to contain more information, or alternatively to include the log file in the alert mail?

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste in the terminal):

        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar

    ... and so on. What can I do to download all these files one by one?
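
    One approach, sketched under the assumption that the URL-to-filename mapping is kept in a plain text file (urls.txt below is hypothetical, one "URL output-name" pair per line), is to loop over that list so each file is downloaded and renamed in turn:

        # urls.txt (hypothetical), one pair per line:
        # http://www.filesonic.com/file/812720774/PS11.rar part11.rar
        # http://www.filesonic.com/file/812721094/PS12.rar part12.rar

        while read -r url name; do
            wget -c --load-cookies cookies.txt "$url" -O "$name"
        done < urls.txt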

    Read the article

  • apt-get doesn't download files from NFS location

    - by Pravesh
    I switched to Unix 3 months ago and am trying to understand the install process, and in particular apt-get. I am able to successfully install and download the packages when I configure my repository on an http location in the /etc/apt/sources.list file, e.g.

        deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

    This will download (to /var/cache/apt/archive) and install the package when I use apt-get install. When I change the source location to file instead of http (an NFS mount point), the package is getting installed but NOT getting downloaded into /var/cache/apt/archive:

        deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

    Please let me know if there is any configuration or setting I have to make to let apt-get both download and install the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this, I can use apt-get --download-only and then apt-get install in two separate calls, but I want to know why the package is not getting downloaded with apt-get install, but only installed, when file:/ is used in sources.list.
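
    One thing that may be worth trying (an assumption based on the copy scheme documented in the sources.list man page, which behaves like file: but copies the packages into the cache directory instead of using them in place):

        deb copy:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free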

    Read the article

  • access_log item w/out IP. Starts with "::1 - - [<date>]"

    - by Meltemi
    Looking at our Apache log I see normal requests like:

        174.133.xxx.xxx - - [20/May/2010:17:36:44 -0700] "GET /index.html HTTP/1.1" 200 2004

    but every so often I get a cluster of these without an IP address:

        ::1 - - [20/May/2010:18:47:21 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:22 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:23 -0700] "OPTIONS * HTTP/1.0" 200 -

    What do they mean, and what causes them?

    Read the article

  • weblogs.asp.net! I am here now!

    - by kaushalparik27
    Hello all webloggers!! Finally, after a long wait, I got my blog space approved here. I really want to thank the moderators (especially Terry for the mail follow-up) for helping me out with creating my weblog here. I usually blog about things and situations that I come across during development, or something on which I have managed to do some study/reading. Until now, I was maintaining my blog elsewhere (which I am still going to maintain in the future as well!). Wishing for the best, and thanks to all future readers!

    Read the article

  • decrypting AES files in an apache module?

    - by Tom H
    I have a client with a security policy compliance requirement to encrypt certain files on disk. The obvious way to do this is with device-mapper and an AES crypto module. However, the current system is set up to generate individual files that are encrypted. What are my options for decrypting files on the fly in Apache? I see that mod_ssl and mod_session_crypto do encryption/decryption or something similar, but not exactly what I am after. I could imagine that a PerlSetOutputFilter would work with a suitable Perl script configured, and I also see mod_ext_filter, so I could just fork a Unix command and decrypt the file, but they both feel like a hack. I am kind of surprised that there is no mod_crypto available... or am I missing something obvious here? Presumably, resource-wise, the Perl filter is the way to go?
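
    A minimal sketch of the mod_ext_filter route mentioned above; the /usr/local/bin/aes-decrypt command is hypothetical (any program that reads ciphertext on stdin and writes plaintext to stdout would do), and forking a process per request is exactly the overhead being weighed against a Perl output filter:

        # httpd.conf - assumes mod_ext_filter is loaded
        ExtFilterDefine aes-decrypt mode=output cmd="/usr/local/bin/aes-decrypt"

        <Directory "/var/www/encrypted">
            SetOutputFilter aes-decrypt
        </Directory>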

    Read the article

  • Outlook 2010 - Export of an Exchange OST to PST creates files with different sizes each time

    - by Jiri Pik
    This is a most weird issue. I have a couple of Exchange OST mailboxes and, just for security, I am exporting them using File / Import / Export to a file / Export to PST file. If I run the export consecutively, it always creates files with different file sizes, WITH NO ERROR OR WARNING that something went wrong. The files should be the same size, since each run happens right after the previous backup finished. I found out that if the file size is substantially lower, a reboot and another backup can fix it. What's your insight into this problem? What could cause the files to have different sizes, and what could cause there to be no warning? I suspect a Windows Search issue, as sometimes the backup fails with a dialog error stating that Windows Search terminated the export.

    Read the article

  • Microsoft and Joyent Announce Node.js Windows Port

    With the Node.js command line tool, developers can type 'node my_app.js' to run JavaScript programs. It gives developers a JavaScript application programming interface (API) that supplies access to the network and file system as well. One instance where Node.js often comes in handy is in the creation of scalable networked programs that emphasize high concurrency and low response times. Developers who wish to use Node.js on Windows at this time must do so by running a virtual machine with Linux. Claudia Caldato, Principal Program Manager of Microsoft's Interoperability Strategy Team, offered...

    Read the article

  • Apache log - file does not exist

    - by Ivan
    I have quite a few of these piling up in the Apache logs every day:

        [Mon Jun 09 20:42:58 2014] [error] [client 180.153.214.181] File does not exist: /home/user/public_html/ajax.googleapis.com, referer: http://www.mysite.com//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js

    I have over 200k visitors per day, but a few of them, a dozen or so, are generating the above error. I can't figure out what may be causing it. I've checked the HTML code and it's all good, so I've run out of ideas.

    Read the article

  • Cannot copy files from external USB HDD to computer

    - by Thomas Versteeg
    Hello, I have an HDD connected to my computer with a USB converter. It consists of two partitions: the first one is mounted automatically and I can grab all the files from it, but the second one I have to mount manually as root on the command line; if I try to open it with Nautilus it gives an error. The drive where the problem is is sdb1; sdb2 has the same settings but works fine. I am using Debian Wheezy. This is the fstab:

        /dev/sdb1 /media/usb0 auto defaults,gid=disk,umask=0777 0 0
        /dev/sdb2 /media/usb1 auto defaults,gid=disk,umask=0777 0 0

    And when I try to copy the files with this command (as root):

        cp -vr /media/usb0/* /home/user/Videos/

    I get these types of errors:

        cp: reading `/media/usb0/.lang/file.ext': Permission denied
        cp: failed to extend `/home/user/Videos/.lang/file.ext': Permission denied

    How can I at least copy the files to my main HDD? I don't need to adjust them, I only need to copy them!

    Read the article

  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of iptables, but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man page for TRACE: "This target marks packets so that the kernel will log every rule which matches the packets as they traverse the tables, chains, rules. (The ipt_LOG or ip6t_LOG module is required for the logging.) The packets are logged with the string prefix 'TRACE: tablename:chain-name:type:rulenum', where type can be 'rule' for a plain rule, 'return' for an implicit rule at the end of a user-defined chain and 'policy' for the policy of the built-in chains. It can only be used in the raw table." I use the following rule:

        iptables -A PREROUTING -t raw -p tcp -j TRACE

    but nothing is appended to either /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place?

    Edit: Even though I can't find log entries, the TRACE target seems to be set up correctly, since the packet counters get incremented:

        # iptables -L -v -t raw
        Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes)
         pkts bytes target prot opt in  out source   destination
          193 63701 TRACE  tcp  --  any any anywhere anywhere

        Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes)
         pkts bytes target prot opt in  out source   destination

    Edit 2: The rule iptables -A PREROUTING -t raw -p tcp -j LOG does print packet information to /var/log/syslog... Why doesn't TRACE work?
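
    Given the man-page note quoted above that ipt_LOG (or ip6t_LOG) must be present for the logging, one hedged thing to check is whether that module is actually loaded; a sketch (module names assumed for a Debian Squeeze 2.6.32-era kernel):

        # Is a logging backend for TRACE present?
        lsmod | grep -iE 'ipt_LOG|nf_log'

        # If not, load it and re-run the test traffic
        modprobe ipt_LOG

        # TRACE entries, when they appear, come through the kernel log
        tail -f /var/log/kern.log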

    Read the article

  • Domain files download upon opening

    - by Marian
    I'm having this weird issue with my domain. My domain is saoo.eu, hosted on HostZilla. The issue is that whenever I open an html/php file it automatically downloads it instead of opening it in the browser, for example the saoo.eu/test.html page. The same thing happens with the index.html file. What is going on? Also, if I want PHP code run inside an HTML file I have to add an .htaccess file, but it doesn't seem to work. I've tested it before.

    Read the article

  • Serialized values or separate table, which is more efficient?

    - by Aravind
    I have a Rails model email_condition_string with a word column in it. Now I have another model called request_creation_email_config with the following columns: admin_filter_group:references, vendor_service:references, email_condition_string:references. email_condition_string has many request_creation_email_configs, and request_creation_email_config belongs to email_condition_string. Instead of this model, a colleague of mine is suggesting that storing the words inside the same model as comma-separated values is more efficient than storing them as a separate model. Is that alright?

    Read the article

  • CVS gives error files fail to update, but they are updated on the server

    - by Alvin Sim
    We are using CVSNT on a machine running Windows XP Pro. After using it for more than 5 years, today we have an issue with CVS. When we try to update a subdirectory in a repository, some of the files fail to update with the message "Permission denied" and some files fail with the message "ABC.java is no longer in the repository". But when we log in to the CVS machine itself and browse the subdirectory, we can see that the files are there. We have checked the Windows folder permissions and they seem correct.

    Read the article
