Search Results

Search found 62606 results on 2505 pages for 'sql files'.

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A and B) are constantly updated with user transactions, and the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change that aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. One column in the product table is used in the view and has to be updated each week. The problem we are now encountering is that as transaction volume against tables A and B increases, the update to table C is causing deadlocks. I have tried several different methods, to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read, "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.
    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.
    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped, and it took quite a bit of time to recreate them.
    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions: a) We are using SQL Server 2005. Does SQL Server 2008 have new indexed-view functionality that would help us? Is there now a way to do dirty reads with an indexed view? b) Is there a better approach to altering an existing view to point to a new table? Thanks!
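
    One avenue worth testing (a sketch only, not the poster's schema: the database, staging table, and column names below are placeholders): SQL Server 2005 already supports row-versioning isolation, so enabling READ_COMMITTED_SNAPSHOT and running the weekly update in small keyed batches that skip unchanged rows can take readers of the base tables out of the lock contention:

        -- Sketch: MyDb, dbo.TableC, dbo.WeeklyStaging, ProductID and WeeklyValue
        -- are hypothetical names. The ALTER needs a moment of exclusive access.
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (500) c                       -- small batches keep locks short
            SET    c.WeeklyValue = s.WeeklyValue
            FROM   dbo.TableC AS c
            JOIN   dbo.WeeklyStaging AS s ON s.ProductID = c.ProductID
            WHERE  c.WeeklyValue <> s.WeeklyValue;   -- only touch rows that changed
            SET @rows = @@ROWCOUNT;
        END;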

  • What is the best way to read the uploaded files from Request.Files, StreamReader or BinaryReader or

    - by ramesh.nagul
    I have a form where the user can upload multiple files. I am using MVC 2.0, and in my controller I need to call a web service with a common import interface that requires the files to be passed in as byte[]. .NET exposes Request.Files as an HttpFileCollectionBase, and I access each file handle as an HttpPostedFile or HttpPostedFileBase, which provides access to a Stream member. What is the best way for me to read the bytes from the stream? BinaryReader? StreamReader? BufferedStream?
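
    A minimal sketch of the BinaryReader route (StreamReader is for text, so it is the wrong tool for arbitrary bytes); the import web-service call itself is elided:

        // Sketch: read each posted file's stream into a byte[] for the import service.
        foreach (string key in Request.Files.AllKeys)
        {
            HttpPostedFileBase posted = Request.Files[key];
            if (posted == null || posted.ContentLength == 0) continue;
            byte[] data;
            using (BinaryReader reader = new BinaryReader(posted.InputStream))
            {
                data = reader.ReadBytes(posted.ContentLength);
            }
            // pass 'data' to the import web service here
        }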

  • Enumerating File Handles in C#

    - by user293392
    I would like to know whether it is possible to enumerate file handles in C#, perhaps using the Win32 API. This is easily done for window and process handles, but it seems not to be possible for file handles. Some of this functionality is offered by NtQuerySystemInformation, but that API is being phased out, so using it is not recommended.
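
    For completeness, a sketch of the NtQuerySystemInformation route, with the caveat the poster already raises: the call is undocumented, the information class and buffer layout can change between Windows versions, and this only fetches the raw snapshot (decoding entries into file names takes further NtQueryObject work):

        // Sketch only: undocumented API; layout may change between Windows versions.
        using System;
        using System.Runtime.InteropServices;

        class HandleSnapshot
        {
            [DllImport("ntdll.dll")]
            static extern int NtQuerySystemInformation(
                int infoClass, IntPtr buffer, int length, out int returnLength);

            const int SystemHandleInformation = 16;
            const int STATUS_INFO_LENGTH_MISMATCH = unchecked((int)0xC0000004);

            static void Main()
            {
                int size = 0x10000, needed;
                IntPtr buf = Marshal.AllocHGlobal(size);
                while (NtQuerySystemInformation(SystemHandleInformation, buf, size,
                                                out needed) == STATUS_INFO_LENGTH_MISMATCH)
                {
                    Marshal.FreeHGlobal(buf);    // buffer too small: grow and retry
                    size *= 2;
                    buf = Marshal.AllocHGlobal(size);
                }
                int count = Marshal.ReadInt32(buf);  // ULONG count, then packed entries
                Console.WriteLine("{0} open handles system-wide", count);
                Marshal.FreeHGlobal(buf);
            }
        }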

  • ActionScript 3 class over several files - how?

    - by Poni
    So, how do we write a class over several files in ActionScript 3? In C# there's the "partial" keyword. In C++ it's natural (you just "#include ..." all the files). In Flex 3, in a component, you add this tag: <mx:Script source="myfile.as"/>. How do I split the following class into several files?

        package package_path {
            public class cSplitMeClass {
                public function cSplitMeClass() {
                }
                public function doX():void {
                    // ....
                }
                public function doY():void {
                    // ....
                }
            }
        }

    For example, I want to have the doX() and doY() functions implemented in another ".as" file. Can I do this? And please, don't tell me something like "a good practice is to have them in one file" :)
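
    One hedged possibility: AS3 does have an include directive, which is a compile-time textual paste rather than an import, so the method bodies can live in separate .as fragments. A sketch (the fragment file names here are made up):

        // cSplitMeClass.as -- sketch; each included fragment contains only the
        // full doX() or doY() function definition, nothing else.
        package package_path {
            public class cSplitMeClass {
                public function cSplitMeClass() {
                }
                include "cSplitMeClass_doX.as"
                include "cSplitMeClass_doY.as"
            }
        }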

  • LaTeX: Extracting the sty files of all the used packages

    - by Zlatko
    Hi. After writing a large .tex file and using many packages, I want to archive everything: not just the .tex and .jpg files but also the .sty files. This is because sometimes some options in the sty files change, and then I can't compile the file. The "problem" is that, using Ubuntu, I already have all the packages installed on my system. I don't want to have to copy them manually. Is there a program that can do this automatically? Thanks.
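
    One tool pairing that does roughly this (assuming a TeX Live-style installation): the snapshot package records every file the document loads, and bundledoc then gathers those files into an archive:

        % Sketch: must come before \documentclass; a compile then writes mythesis.dep.
        \RequirePackage{snapshot}
        \documentclass{article}
        % ... rest of the document unchanged ...

    After a successful compile, running "bundledoc mythesis.dep" from the shell (possibly with a --config option matching your TeX distribution) should collect the .tex sources together with every .sty and .cls file the run actually used.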

  • MS Query Analyzer / Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5, and I've always been a bit amazed that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tool is even worse: Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way the Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support, and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools adequate? Have you ever wished they were better, and if you have found something better, could you share please? Thanks!

  • Python: Decent config file format

    - by miracle2k
    I'd like to use a configuration file format which supports key-value pairs and nestable, repeatable structures, and which is as light on syntax as possible. I'm imagining something along the lines of:

        cachedir = /var/cache
        mail_to = [email protected]

        job {
            name = my-media
            frequency = 1 day
            source {
                from = /home/michael/Images
            }
            source {
            }
        }

        job {
        }

    I'd be happy with something using significant whitespace as well. JSON requires too many explicit syntax rules (quoting, commas, etc.). YAML is actually pretty good, but would require the jobs to be defined as a YAML list, which I find slightly awkward to use.
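
    Since nothing off the shelf matches exactly, a format this small can also be hand-parsed. A minimal sketch (the jobs.conf file name is made up) that turns the braces-and-equals syntax above into nested dicts, collecting repeated sections into lists:

        # Sketch: tiny recursive parser for the format shown above.
        # Section keys (job, source) collect into lists; plain keys become strings.

        def parse(lines):
            node = {}
            for raw in lines:
                line = raw.split('#', 1)[0].strip()   # allow end-of-line comments
                if not line:
                    continue
                if line == '}':
                    return node                        # end of the current section
                if line.endswith('{'):
                    key = line[:-1].strip()
                    node.setdefault(key, []).append(parse(lines))
                elif '=' in line:
                    key, value = (part.strip() for part in line.split('=', 1))
                    node[key] = value
            return node

        with open('jobs.conf') as f:
            config = parse(iter(f))
        print(config['job'][0]['name'])   # -> 'my-media'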

  • How to keep g++ from taking header file from /usr/include?

    - by WilliamKF
    I am building using zlib.h, of which I have a local copy of v1.2.5, but /usr/include/zlib.h is v1.2.1.2. If I omit adding -I/my/path/to/zlib to my make, I get an error from using the old version, which doesn't have Z_FIXED:

        g++ -g -Werror -Wredundant-decls -D_FILE_OFFSET_BITS=64 -c -o ARCH.linux_26_i86/debug/sysParam.o sysParam.cpp
        sysParam.cpp: In member function `std::string CSysParamAccess::getCompressionStrategyName() const':
        sysParam.cpp:1816: error: `Z_FIXED' was not declared in this scope
        sysParam.cpp: In member function `bool CSysParamAccess::setCompressionStrategy(const std::string&, paramSource)':
        sysParam.cpp:1849: error: `Z_FIXED' was not declared in this scope

    Alternatively, if I add the include path to the zlib 1.2.5 I am using, I get double definitions. It seems as if zlib.h is included twice with two different sets of -D values, but I don't see how that is happening:

        g++ -g -Werror -Wredundant-decls -I../../src/zlib-1.2.5 -D_FILE_OFFSET_BITS=64 -c -o ARCH.linux_26_i86/debug/sysParam.o sysParam.cpp
        In file included from sysParam.cpp:24:
        ../../src/zlib-1.2.5/zlib.h:1582: warning: redundant redeclaration of `void* gzopen64(const char*, const char*)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1566: warning: previous declaration of `void* gzopen64(const char*, const char*)'
        ../../src/zlib-1.2.5/zlib.h:1583: warning: redundant redeclaration of `long long int gzseek64(void*, long long int, int)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1567: warning: previous declaration of `off64_t gzseek64(void*, off64_t, int)'
        ../../src/zlib-1.2.5/zlib.h:1584: warning: redundant redeclaration of `long long int gztell64(void*)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1568: warning: previous declaration of `off64_t gztell64(void*)'
        ../../src/zlib-1.2.5/zlib.h:1585: warning: redundant redeclaration of `long long int gzoffset64(void*)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1569: warning: previous declaration of `off64_t gzoffset64(void*)'
        ../../src/zlib-1.2.5/zlib.h:1586: warning: redundant redeclaration of `uLong adler32_combine64(uLong, uLong, long long int)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1570: warning: previous declaration of `uLong adler32_combine64(uLong, uLong, off64_t)'
        ../../src/zlib-1.2.5/zlib.h:1587: warning: redundant redeclaration of `uLong crc32_combine64(uLong, uLong, long long int)' in same scope
        ../../src/zlib-1.2.5/zlib.h:1571: warning: previous declaration of `uLong crc32_combine64(uLong, uLong, off64_t)'

    Here are some of the relevant lines from zlib.h referred to above:

        // This would be line 1558 of zlib.h
        /* provide 64-bit offset functions if _LARGEFILE64_SOURCE defined, and/or
         * change the regular functions to 64 bits if _FILE_OFFSET_BITS is 64 (if
         * both are true, the application gets the *64 functions, and the regular
         * functions are changed to 64 bits) -- in case these are set on systems
         * without large file support, _LFS64_LARGEFILE must also be true
         */
        #if defined(_LARGEFILE64_SOURCE) && _LFS64_LARGEFILE-0
           ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
           ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int));
           ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile));
           ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile));
           ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off64_t));
           ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off64_t));
        #endif

        #if !defined(ZLIB_INTERNAL) && _FILE_OFFSET_BITS-0 == 64 && _LFS64_LARGEFILE-0
        #  define gzopen gzopen64
        #  define gzseek gzseek64
        #  define gztell gztell64
        #  define gzoffset gzoffset64
        #  define adler32_combine adler32_combine64
        #  define crc32_combine crc32_combine64
        #  ifdef _LARGEFILE64_SOURCE
             ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
             ZEXTERN z_off_t ZEXPORT gzseek64 OF((gzFile, z_off_t, int));
             ZEXTERN z_off_t ZEXPORT gztell64 OF((gzFile));
             ZEXTERN z_off_t ZEXPORT gzoffset64 OF((gzFile));
             ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off_t));
             ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off_t));
        #  endif
        #else
           ZEXTERN gzFile ZEXPORT gzopen OF((const char *, const char *));
           ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile, z_off_t, int));
           ZEXTERN z_off_t ZEXPORT gztell OF((gzFile));
           ZEXTERN z_off_t ZEXPORT gzoffset OF((gzFile));
           ZEXTERN uLong ZEXPORT adler32_combine OF((uLong, uLong, z_off_t));
           ZEXTERN uLong ZEXPORT crc32_combine OF((uLong, uLong, z_off_t));
        #endif
        // This would be line 1597 of zlib.h

    I'm not sure how to track this down further. I tried moving the include of zlib.h to the top and to the bottom of the include list in the cpp file, but it made no difference. An excerpt of passing -E to g++ shows, in part:

        extern int inflateInit2_ (z_streamp strm, int windowBits, const char *version, int stream_size);
        extern int inflateBackInit_ (z_streamp strm, int windowBits, unsigned char *window, const char *version, int stream_size);
        # 1566 "../../src/zlib-1.2.5/zlib.h"
        extern gzFile gzopen64 (const char *, const char *);
        extern off64_t gzseek64 (gzFile, off64_t, int);
        extern off64_t gztell64 (gzFile);
        extern off64_t gzoffset64 (gzFile);
        extern uLong adler32_combine64 (uLong, uLong, off64_t);
        extern uLong crc32_combine64 (uLong, uLong, off64_t);
        # 1582 "../../src/zlib-1.2.5/zlib.h"
        extern gzFile gzopen64 (const char *, const char *);
        extern long long gzseek64 (gzFile, long long, int);
        extern long long gztell64 (gzFile);
        extern long long gzoffset64 (gzFile);
        extern uLong adler32_combine64 (uLong, uLong, long long);
        extern uLong crc32_combine64 (uLong, uLong, long long);
        # 1600 "../../src/zlib-1.2.5/zlib.h"
        struct internal_state {int dummy;};

    I'm not sure why the declarations from lines 1566 and 1582 both survive preprocessing, but hence the warnings about duplicate declarations.
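
    Not a root-cause diagnosis, but one hedged workaround while tracking it down: pass the local zlib directory with -isystem instead of -I. g++ then treats those headers as system headers and stops reporting their redundant declarations under -Wredundant-decls:

        g++ -g -Werror -Wredundant-decls -isystem ../../src/zlib-1.2.5 -D_FILE_OFFSET_BITS=64 -c -o ARCH.linux_26_i86/debug/sysParam.o sysParam.cpp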

  • PHP is unable to open a file for writing - but it does exist

    - by asdasdas
    I am trying to write to a file. I do a file_exists check on it before fopen, and it comes up true: the file does exist. However, the file fails this code and gives me the error every time:

        $handle = fopen($filename, 'w');
        if ($handle) {
            flock($handle, LOCK_EX);
            fwrite($handle, $contents);
        } else {
            echo 'ERROR: Unable to open the file for writing.', PHP_EOL;
            exit();
        }
        flock($handle, LOCK_UN);
        fclose($handle);

    Is there a way I can get more specific error details as to why this file won't open for writing? I know that the filename is legit, but for some reason it just won't let me write to it. I do have write permissions; I was able to write and overwrite another file.
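
    A sketch of one way to surface the underlying reason (error_get_last needs PHP 5.2+; the extra checks are just hypotheses to test, such as directory permissions):

        // Sketch: capture PHP's own message for the failed open instead of guessing.
        $handle = @fopen($filename, 'w');
        if ($handle === false) {
            $err = error_get_last();   // e.g. "failed to open stream: Permission denied"
            echo 'ERROR: ', $err['message'], PHP_EOL;
            echo 'file writable? ', var_export(is_writable($filename), true), PHP_EOL;
            echo 'dir writable?  ', var_export(is_writable(dirname($filename)), true), PHP_EOL;
            exit();
        }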

  • Carrier Wave not completing upload to Rackspace Cloud Files

    - by Zack Fernandes
    Hello, I have been attempting to get file uploads to Rackspace Cloud Files working all night, and finally tried the CarrierWave plugin. Although the plugin worked right away, when I tried viewing the uploaded file (an image) it was broken. Upon further testing, I found that files do upload to Cloud Files, but arrive at just a fraction of their original size. I can't seem to figure out what's wrong, and any help would be greatly appreciated. My code is as follows.

    models\attachment.rb:

        class Attachment < ActiveRecord::Base
          attr_accessible :title, :user_id, :file, :remote_file_url, :file_cache, :remove_file
          belongs_to :user
          mount_uploader :file, AttachmentUploader
        end

    uploaders\attachment_uploader.rb:

        class AttachmentUploader < CarrierWave::Uploader::Base
          storage :cloud_files

          def store_dir
            "#{model.user_id}-#{model.id}"
          end
        end

  • Check params['Filedata'] in Rails

    - by krunal shah
    How can I check whether my params['Filedata'] is corrupted or not? I have a function that reads the file from params['Filedata'] and writes it to another file:

        File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }

    This line works fine for me. But when I call this function through delayed_job's send_later, I get an error from params['Filedata'].read.
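
    A likely culprit (an assumption, since the error text isn't shown): the uploaded file's IO handle can't survive being serialized into the delayed_job queue, so read the bytes while the request is still alive and enqueue plain data instead. A sketch:

        # Sketch: read in the request, hand delayed_job only serializable values.
        data = params['Filedata'].read
        send_later(:write_upload, upload_file, data)

        def write_upload(path, data)
          File.open(path, "wb") { |f| f.write(data) }
        end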

  • An important question on iPhone file writing

    - by Kyle
    I use the NSHomeDirectory() function to get the app's home folder and write to the Documents directory within that. I'm curious, though: what happens when the user downloads an update for the app from the App Store? Will it all be deleted? When I delete the app on the device and then reinstall it, the directory is wiped out. So I'm curious what will happen with an update. I can't find this in the documentation at all. Thanks a lot for reading. I really tried to find this asked somewhere else first, but couldn't. Hopefully this page will be informative to guys like me who are confused on the subject.

  • Mercurial/.hgignore - How do I ignore everything but the contents of a folder?

    - by Beibin
    I have a NetBeans project, and the Mercurial repository is in the project root. I would like it to ignore everything except the contents of the "src" and "test" folders, and .hgignore itself. I'm not familiar with regular expressions and can't come up with one that will do that. The ones I tried:

        (?!src/.*)
        (?!test/.*)
        (?!^.hgignore)
        (?!src/.|test/.|.hgignore)

    These seem to ignore everything, and I can't figure out why. Any advice would be great.
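
    A hedged sketch of one .hgignore that should do this: Mercurial matches regexp patterns with an unanchored search against repo-relative paths, so a bare lookahead like (?!src/.*) can succeed at some later position inside almost every path, which is why the attempts above ignore everything. Anchoring with ^ and putting all the exceptions in one alternation avoids that:

        syntax: regexp
        # ignore any path that does not start with src/, test/, or .hgignore itself
        ^(?!src/|test/|\.hgignore$)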

  • Kohana3: Absolute path to a file

    - by Svish
    Say I have a file in my Kohana 3 website called assets/somefile.jpg. I can get the URL to that file by doing:

        echo Url::site('assets/somefile.jpg'); // /kohana/assets/somefile.jpg

    Is there a way I can get the absolute path to that file? Like if I want to fopen it or get the size of the file or something like that. In other words, I would like to get something like /var/www/kohana/assets/somefile.jpg or W:\www\kohana\assets\somefile.jpg or whatever the absolute path is.
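
    If memory serves, Kohana 3's front controller (index.php) defines a DOCROOT constant holding the filesystem path of the web root, so this may be as simple as concatenation. A sketch:

        // Sketch: DOCROOT is set in Kohana 3's index.php.
        $path = DOCROOT.'assets/somefile.jpg'; // e.g. /var/www/kohana/assets/somefile.jpg
        echo filesize($path);
        $fp = fopen($path, 'rb');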

  • Unreadable PDF files

    - by Michal_R
    Hello, I am writing a Master's thesis - an NLP system. I have one component - an extractor. It extracts plain text from PDF files. There are a few PDF files that cannot be extracted correctly. The extractor (PDFBox library) returns a string like this:

        "¦xDn¦if|d+gDF"Ti&cD+lh d FÁhis~n +xd f«"d¦ffih »h"

    or

        "10a61a91a22a25a3a27a17a23a20a8a13a14a61a25a17"

    I checked each file that causes this extraction problem, and the text in these files also cannot be copy-pasted from a PDF reader (Adobe Reader and Foxit Reader). Viewing them in these readers works, but after selecting the content and copying it to the clipboard I get the same wrong text (as described above - strings of semantically incorrect characters, or strings of digits and letters). Could anybody help me??? THX :)

  • How do I start WebDevServer from a .sln file without opening Visual Studio 2008

    - by -providerscriptmaster
    Is there a way to start WebDevServer (the Visual Web Development Server) by passing in the .sln file, without actually opening Visual Studio 2008? I am a JavaScript developer working on a client project, and I want to save the memory overhead consumed by VS and give it to multiple browsers for cross-browser testing. I am hesitant to set up IIS (the Visual Web Dev server is SO LIGHT-WEIGHT, being Cassini). Please advise. Thanks!
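
    For what it's worth, Cassini can be launched directly; it takes a web-site folder and a port rather than a .sln. A sketch (the executable path below is the usual VS 2008 default, and the site path is a placeholder):

        "%ProgramFiles%\Common Files\Microsoft Shared\DevServer\9.0\WebDev.WebServer.exe" /port:8080 /path:"C:\path\to\YourWebSite"

    Then browse to http://localhost:8080/ with no Visual Studio instance running.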

  • How to download a file from a remote server using ASP.NET

    - by ush
    The code below works fine for downloading a file from the current machine. Please suggest how to download from a remote server instead, using an IP address or any other method.

        protected void Button1_Click(object sender, EventArgs e)
        {
            const string fName = @"C:\ITFSPDFbills\February\AA.pdf";
            FileInfo fi = new FileInfo(fName);
            long sz = fi.Length;
            Response.ClearContent();
            Response.ContentType = MimeType(Path.GetExtension(fName));
            Response.AddHeader("Content-Disposition",
                string.Format("attachment; filename = {0}", System.IO.Path.GetFileName(fName)));
            Response.AddHeader("Content-Length", sz.ToString("F0"));
            Response.TransmitFile(fName);
            Response.End();
        }

        public static string MimeType(string Extension)
        {
            string mime = "application/octetstream";
            if (string.IsNullOrEmpty(Extension))
                return mime;
            string ext = Extension.ToLower();
            Microsoft.Win32.RegistryKey rk = Microsoft.Win32.Registry.ClassesRoot.OpenSubKey(ext);
            if (rk != null && rk.GetValue("Content Type") != null)
                mime = rk.GetValue("Content Type").ToString();
            return mime;
        }
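
    One hedged approach: if the remote machine exposes the file over HTTP (or a UNC share the app-pool identity can read), fetch the bytes and relay them to the browser. The host, path, and file name below are placeholders:

        // Sketch: pull the file from the remote box, then stream it to the client.
        protected void Button2_Click(object sender, EventArgs e)
        {
            byte[] data;
            using (System.Net.WebClient client = new System.Net.WebClient())
            {
                data = client.DownloadData("http://192.0.2.10/ITFSPDFbills/February/AA.pdf");
            }
            Response.ClearContent();
            Response.ContentType = "application/pdf";
            Response.AddHeader("Content-Disposition", "attachment; filename=AA.pdf");
            Response.AddHeader("Content-Length", data.Length.ToString());
            Response.BinaryWrite(data);
            Response.End();
        }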

  • Spring, autowire @Value from a database

    - by Guido
    I am using a properties file to store some configuration properties, which are accessed this way:

        @Value("#{configuration.path_file}")
        private String pathFile;

    Is it possible (with Spring 3) to use the same @Value annotation, but loading the properties from a database instead of a file?
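
    Since #{...} is just SpEL evaluated against beans, one hedged route is to make "configuration" a bean whose entries are loaded from a table at startup. A sketch; the table and column names are invented, and it assumes Spring's SpEL map accessor resolves configuration.path_file against the Properties map:

        // Sketch: a bean named "configuration" backed by a database table.
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Component;

        @Component("configuration")
        public class DbConfiguration extends java.util.Properties {

            @Autowired
            public DbConfiguration(javax.sql.DataSource dataSource) throws java.sql.SQLException {
                java.sql.Connection con = dataSource.getConnection();
                try {
                    java.sql.ResultSet rs = con.createStatement()
                            .executeQuery("SELECT name, value FROM app_config"); // hypothetical table
                    while (rs.next()) {
                        setProperty(rs.getString("name"), rs.getString("value"));
                    }
                } finally {
                    con.close();
                }
            }
        }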

  • Good way to find duplicate files?

    - by OverTheRainbow
    Hello. I don't know enough about VB.Net (2008, Express Edition) yet, so I wanted to ask if there is a better way to find files with different names but the same contents, i.e. duplicates. In the following code, I use GetFiles() to retrieve all the files in a given directory and, for each file, use MD5 to hash its contents and check if this value already lives in a dictionary: if yes, it's a duplicate and I'll delete it; if not, I add this filename/hash value to the dictionary for later:

        'Get all files from directory
        Dim currfile As String
        For Each currfile In Directory.GetFiles("C:\MyFiles\", "File.*")
            'Check if hashing already found as value, ie. duplicate
            If StoreItem.ContainsValue(ReadFileMD5(currfile)) Then
                'Delete duplicate
            'This hashing not yet found in dictionary -> add it
            Else
                StoreItem.Add(currfile, ReadFileMD5(currfile))
            End If
        Next

    Is this a good way to solve the issue of finding duplicates, or is there a better way I should know about? Thank you.
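
    A couple of hedged improvements on the same idea: hash each file only once, and key the dictionary by the hash so the lookup is O(1) instead of ContainsValue's full scan. ReadFileMD5 is the poster's own helper, and the delete call is a placeholder for whatever should happen to duplicates:

        'Sketch: hash once per file, key by hash, remember the first path seen.
        Imports System.IO
        Imports System.Collections.Generic

        Dim seen As New Dictionary(Of String, String) 'hash -> first file with it
        For Each currfile As String In Directory.GetFiles("C:\MyFiles\", "File.*")
            Dim hash As String = ReadFileMD5(currfile)
            If seen.ContainsKey(hash) Then
                File.Delete(currfile) 'duplicate of seen(hash)
            Else
                seen.Add(hash, currfile)
            End If
        Next

    Comparing file sizes before hashing would cut the work further, since files of different lengths cannot be identical.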

  • Directory Search

    - by xarzu
    This is a simple question, and I am sure you C# pros know it. If you want to grab the contents of a directory on a hard drive (local or otherwise) in a C# program, how do you go about it?
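
    A minimal sketch with System.IO (the root path is a placeholder; a UNC path such as \\server\share works the same way):

        using System;
        using System.IO;

        class ListDir
        {
            static void Main()
            {
                string root = @"C:\SomeDir"; // placeholder path
                foreach (string dir in Directory.GetDirectories(root))
                    Console.WriteLine("DIR  " + dir);
                foreach (string file in Directory.GetFiles(root))
                    Console.WriteLine("FILE " + file);
            }
        }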

  • Transfer files using java

    - by markovuksanovic
    I need to transfer lots of small files to a remote computer from within my Java program. I was wondering if somebody could suggest the best way to do so... I need to transfer lots of small files, and it has to be really fast. Should I use some existing protocol implementation, maybe FTP? One important thing: most files will be the same every time, or the differences will be minor, so I was thinking of using git for that purpose. Does anyone have experience with something like this?
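
    For the raw-speed angle, per-file protocol round-trips are usually what kill small-file transfers. A hedged sketch of packing everything into one zip stream over a single connection (the host, port, directory, and the receiving end are all assumptions):

        // Sketch: one connection, one compressed stream, no per-file handshakes.
        import java.io.*;
        import java.net.Socket;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class PushFiles {
            public static void main(String[] args) throws IOException {
                File[] files = new File("outbox").listFiles(); // placeholder directory
                Socket sock = new Socket("remote.example.com", 9000); // placeholder host/port
                ZipOutputStream zip = new ZipOutputStream(
                        new BufferedOutputStream(sock.getOutputStream()));
                byte[] buf = new byte[8192];
                for (File f : files) {
                    zip.putNextEntry(new ZipEntry(f.getName()));
                    InputStream in = new FileInputStream(f);
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        zip.write(buf, 0, n);
                    }
                    in.close();
                    zip.closeEntry();
                }
                zip.close(); // also flushes and closes the socket's stream
                sock.close();
            }
        }

    If most files really are unchanged between runs, an rsync-style delta transfer (rsync over SSH, or a git push as the poster suggests) avoids resending them at all.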

  • Basic refactoring features (e.g., Rename) unavailable when editing code in aspx/ascx files

    - by DanM
    I was just editing some C# code between <% %> tags in an .ascx file, and I noticed that the Refactor contextual menu is unavailable. And even if I manually add items from this menu to a custom toolbar, they are disabled when viewing aspx/ascx files. I usually only have small snippets of C# code in my aspx/ascx files, but it would still be nice to be able to perform refactoring operations on any code that exists between <% %> tags. I feel like I'm going back to the dark ages when I have to use find/replace to change the name of a variable.

    Questions:
    1. Is there a way to enable Visual Studio's refactoring features while viewing aspx/ascx files in Visual Studio?
    2. Are there any Visual Studio plug-ins (preferably free) that offer this kind of functionality?

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects and sysindexes and run SELECT * FROM <table> WITH (INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away, and unlike me none of them work at silly hours), so it is not a problem if such a process, until complete, slows users down even more than an unprimed working set would.
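
    For reference, a sketch of the "hard way" against the SQL 2005 catalog views. COUNT_BIG over a hinted index should read the whole index without hauling rows to the client; heaps, which have no index name, are skipped here:

        -- Sketch: touch every page of every named index in the database.
        DECLARE @sql NVARCHAR(MAX);
        DECLARE prime_cur CURSOR FAST_FORWARD FOR
            SELECT 'SELECT COUNT_BIG(*) FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
                 + ' WITH (INDEX(' + QUOTENAME(i.name) + '))'
            FROM sys.indexes AS i
            JOIN sys.objects AS o ON o.object_id = i.object_id
            JOIN sys.schemas AS s ON s.schema_id = o.schema_id
            WHERE o.type = 'U' AND i.name IS NOT NULL;
        OPEN prime_cur;
        FETCH NEXT FROM prime_cur INTO @sql;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC sp_executesql @sql;   -- full scan drags the index's pages into cache
            FETCH NEXT FROM prime_cur INTO @sql;
        END;
        CLOSE prime_cur;
        DEALLOCATE prime_cur;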

  • How to read the Web.Config file in a Custom Activity Designer in a WF4 Workflow Service

    - by Preet Sangha
    I have a WF service with a custom activity and a custom designer (WPF). I want to add a validation that checks for the presence of some value in the web.config file. At runtime I can override void CacheMetadata(ActivityMetadata metadata), and there I can happily do the validation using System.Configuration.ConfigurationManager to read the config file. Since I also want to do this at design time, I was looking for a way to do it in the designer.
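
    The design-time catch (an assumption worth verifying): inside the designer the host process is Visual Studio, so ConfigurationManager reads devenv.exe.config rather than the service's web.config. Opening the file explicitly sidesteps that. A sketch, with the project path left as something the designer would have to supply (requires a reference to System.Web and System.Configuration):

        // Sketch: open a specific web.config instead of the host process's config.
        using System.Web.Configuration;

        WebConfigurationFileMap map = new WebConfigurationFileMap();
        map.VirtualDirectories.Add("/",
            new VirtualDirectoryMapping(@"C:\path\to\ServiceProject", true)); // placeholder
        System.Configuration.Configuration config =
            WebConfigurationManager.OpenMappedWebConfiguration(map, "/");
        System.Configuration.KeyValueConfigurationElement element =
            config.AppSettings.Settings["MyKey"]; // hypothetical key name
        bool present = (element != null);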
