Search Results

Search found 13068 results on 523 pages for 'copy paste'.


  • VS2008 DataSet Wizard doesn't match tables for updating

    - by James H
    Hi all, first question ever on this site. I've been having a really stubborn problem using Visual Studio 2008 and I'm hoping someone has figured this out before.

    I have 2 libraries and 1 project that use strongly typed DataSets (MSSQL backend) that I generated using the "Configure DataSet with Wizard" option in Data Sources. I've had them working just fine for a while, and I've written a lot of code in the non-designer file for the row classes. I've also specified a lot of custom queries using the DataSet designer. This is all work I can't afford to lose.

    I've recently made some changes to reorganize my libraries, which included changing the names of the libraries themselves. I also changed the connection string to point to a different database which is a development copy (same exact schema). The problem is that now, when I open up "Configure DataSet with Wizard" to pick up a new column I've added to one of the tables, it no longer matches the tables correctly in the wizard. The wizard displays all of the tables in the database and none of them have check boxes next to them (i.e. they are not part of this dataset). Below those it shows all of the tables again, but with red Xs, and these are checked. Basically, Visual Studio sees all of the tables it currently has in the DataSet and sees all of the tables in the database, but believes they are no longer the same and thus do not match!

    I've had this same thing happen quite a while back, and I think I just rebuilt the XSD from scratch, manually copied the code over, and then had to redefine all of the custom queries I built in the dataset designer. That's not a good solution. I'm looking for 2 answers:

    1. What causes this to happen, and how do I prevent it?
    2. How do I fix this so that the wizard once again believes the tables in its XSD are the same tables that are in the database (yes, they still have the exact same names)?

    Thanks.

    Read the article

  • Web Service Client in JBoss 5.1 with JDK6

    - by dcp
    This is a continuation of the question here: http://stackoverflow.com/questions/2435286/jboss-does-app-have-to-be-compiled-under-same-jdk-as-jboss-is-running-under It's different enough, though, that it required a new question.

    I am trying to use JDK6 to run JBoss 5.1, and I downloaded the JDK6 version of JBoss 5.1. This works fine and my EAR application deploys fine. However, when I want to run a web service client with code like this:

        public static void main(String[] args) throws Exception {
            System.out.println("creating the web service client...");
            TestClient client = new TestClient("http://localhost:8080/tc_test_project-tc_test_project/TestBean?wsdl");
            Test service = client.getTestPort();
            System.out.println("calling service.retrieveAll() using the service client");
            List<TestEntity> list = service.retrieveAll();
            System.out.println("the number of elements in list retrieved using the client is " + list.size());
        }

    I get the following exception:

        javax.xml.ws.WebServiceException: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage
            at org.jboss.ws.core.jaxws.client.ClientImpl.handleRemoteException(ClientImpl.java:396)
            at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:302)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150)

    Now, here's the really interesting part. If I change the JDK that the code above is running under from JDK6 to JDK5, the exception above goes away! It's really strange. The only way I found for the code above to run under JDK6 was to take the JBOSS_HOME/lib/endorsed folder and copy it to JDK6_HOME/lib. This seems like it shouldn't be necessary, but it is. Is there any other way to make this work, other than using the workaround I just described?

    Read the article

  • How do I display results of phpcs in VIM?

    - by Matt
    I am presently trying to use PHP CodeSniffer (PEAR) in Vim for PHP files. I have found 2 sites that give code to add into the $HOME/.vim/plugin/phpcs.vim file. I have added the code and I "think" it is working, but I cannot see the results; I only see one line at the very bottom of Vim that says (1 of 32), but I cannot see any of the 32 errors.

    Here is my .vimrc file:

        " Backup Options -> Some People may not want this... it generates extra files
        set backup            " Enable Backups
        set backupext=.bak    " Add .bak extention to modified files
        set patchmode=.orig   " Copy original file to with .orig extention Before saving.

        " Set Tabs and spacing for PHP as recomended by PEAR and Zend
        set expandtab
        set shiftwidth=4
        set softtabstop=4
        set tabstop=4

        " Set Auto-indent options
        set cindent
        set smartindent
        set autoindent

        " Show lines that exceed 80 characters
        match ErrorMsg '\%80v.\+'

        " Set Colors
        set background=dark

        " Show a status bar
        set ruler
        set laststatus=2

        " Set Search options highlight, and wrap search
        set hls is
        set wrapscan

        " File Type detection
        filetype on
        filetype plugin on

        " Enable Spell Checking
        set spell

        " Enable Code Folding
        set foldenable
        set foldmethod=syntax

        " PHP Specific options
        let php_sql_query=1      " Highlight sql in php strings
        let php_htmlInStrings=1  " Highlight HTML in php strings
        let php_noShortTags=1    " Disable PHP Short Tags
        let php_folding=1        " Enable Ability to FOLD html Code

    I have tried 2 different versions of phpcs.vim, and I get the same results for both.

    Version 1 (found at: VIM as a PHP IDE):

        function! RunPhpcs()
            let l:filename=@%
            let l:phpcs_output=system('phpcs --report=csv --standard=YMC '.l:filename)
            " echo l:phpcs_output
            let l:phpcs_list=split(l:phpcs_output, "\n")
            unlet l:phpcs_list[0]
            cexpr l:phpcs_list
            cwindow
        endfunction

        set errorformat+=\"%f\"\\,%l\\,%c\\,%t%*[a-zA-Z]\\,\"%m\"
        command! Phpcs execute RunPhpcs()

    Version 2 (found at: Integrated PHP Codesniffer in VIM):

        function! RunPhpcs()
            let l:filename=@%
            let l:phpcs_output=system('phpcs --report=csv --standard=YMC '.l:filename)
            let l:phpcs_list=split(l:phpcs_output, "\n")
            unlet l:phpcs_list[0]
            cexpr l:phpcs_list
            cwindow
        endfunction

        set errorformat+="%f"\\,%l\\,%c\\,%t%*[a-zA-Z]\\,"%m"
        command! Phpcs execute RunPhpcs()

    Both of these produce identical results. phpcs is installed on my system, and I am able to generate results outside of Vim. Any help would be appreciated; I am just learning more about Vim...

    Read the article

  • Dynamically load jQuery into .js page

    - by RussP
    Please excuse me if I'm being simple here. I want to create a simple widget that people can access from their websites - e.g. by copy/pasting something like this anywhere in their web pages:

        <script language="javascript" src="test2.js"></script>
        <div id="test"></div>

    The div is dynamically filled via jQuery and the functions in test2.js. I can do it if jQuery is actually "printed" on the page by test2.js, but I cannot get any jQuery functions to work if I try to include/call jQuery dynamically. How do you call jQuery via JavaScript and then get it to work with the functions on the page?

    And/or is there an easy way to add the <div id="test"></div> dynamically as well? Sure, I can body.append etc., but that only adds at the bottom of the page. Is there a way to .append in the position where the script include <script language="javascript" src="test2.js"></script> is actually placed? Hope I make sense.
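    For what it's worth, the usual pattern for the dynamic-loading half of this (loading jQuery from inside a plain .js file, and inserting markup where that <script> tag sits) looks roughly like the sketch below. This is only an illustration of the general technique, not the actual test2.js; the widget id and the jQuery CDN URL are assumptions, and very old IE versions would additionally need an onreadystatechange handler.

        // hypothetical test2.js (illustrative sketch)
        (function () {
            // At parse time, the currently executing script tag is the last
            // <script> in the document, so the widget container can be inserted
            // right after it instead of at the end of <body>.
            var scripts = document.getElementsByTagName('script');
            var me = scripts[scripts.length - 1];
            var holder = document.createElement('div');
            holder.id = 'test';
            me.parentNode.insertBefore(holder, me.nextSibling);

            function startWidget() {
                // jQuery is guaranteed to exist by the time this runs.
                jQuery('#test').text('widget loaded');
            }

            if (window.jQuery) {
                startWidget();
            } else {
                // Load jQuery dynamically and wait for its onload callback;
                // code placed right after appendChild would run too early.
                var tag = document.createElement('script');
                tag.src = 'https://code.jquery.com/jquery-1.4.2.min.js';
                tag.onload = startWidget;
                document.getElementsByTagName('head')[0].appendChild(tag);
            }
        })();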

    Read the article

  • WPF XAML references not resolved via myAssembly.GetReferencedAssemblies()

    - by WPF-it
    I have a WPF container application (with a ContentControl host) and a containee application (a UserControl). Both are oblivious of each other. A single XML config file holds the string dllpath of the containee's DLL and the full namespace name of the ViewModel class inside the containee.

    Generic code in the container resolves the containee's assembly (Assembly.LoadFrom(dllpath)) and creates the viewmodel's instance using Activator.CreateInstance(vmType). This viewmodel is hosted inside the ContentControl of the container, and the relevant viewmodel-specific ResourceDictionary is added to ContentControl.Resources.MergedDictionaries of the container's content control, so the view loads fine.

    Now my containee has to host the WPF DataGrid using an assembly reference to WPFToolkit.dll from my local C:\Lib folder. The Copy Local reference to WPFToolkit.dll is added to the .csproj file of the containee's project, and it is only referred to in the UserControl's XAML using its XAML namespace. This way the bin\debug folder in my containee application gets WPFToolkit.dll copied.

    XAML:

        xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"

        <Controls:DataGrid ItemsSource="{Binding AssetList}" ... />

    Issue: the moment the ViewModel (i.e. the containee's UserControl) tries to load itself, I get this error:

        "Cannot find type 'Microsoft.Windows.Controls.DataGrid'. The assembly used when compiling might be different than that used when loading and the type is missing."

    Hence I tried to load the referenced assemblies of the containee's assembly (myAssembly.GetReferencedAssemblies()) before the viewmodel is hosted. But WPFToolkit isn't there in that list of assemblies! The strange thing is that I have another DLL referred to, called Logger.dll, in the containee codebase, but that one is used from C# code-behind, so its reference is correctly resolved in myAssembly.GetReferencedAssemblies(). So does that mean BAML references to assemblies are never resolvable by GetReferencedAssemblies()?

    Read the article

  • Javascript: Whitespace Characters being Removed in Chrome (but not Firefox)

    - by Matrym
    Why would the code below eliminate the whitespace around matched keyword text when replacing it with an anchor link? Note, this error only occurs in Chrome, and not Firefox. For complete context, the file is located at: http://seox.org/lbp/lb-core.js

    To view the code in action (no errors found yet), the demo page is at http://seox.org/test.html. Copy/pasting the first paragraph into a rich text editor (i.e. Dreamweaver, or Gmail with the rich text editor turned on) will reveal the problem, with words bunched together. Pasting it into a plain text editor will not.

        // Find page text (not in links) -> doxdesk.com
        function findPlainTextExceptInLinks(element, substring, callback) {
            for (var childi= element.childNodes.length; childi-->0;) {
                var child= element.childNodes[childi];
                if (child.nodeType===1) {
                    if (child.tagName.toLowerCase()!=='a')
                        findPlainTextExceptInLinks(child, substring, callback);
                } else if (child.nodeType===3) {
                    var index= child.data.length;
                    while (true) {
                        index= child.data.lastIndexOf(substring, index);
                        if (index===-1 || limit.indexOf(substring.toLowerCase()) !== -1)
                            break;
                        // don't match an alphanumeric char
                        var dontMatch =/\w/;
                        if(child.nodeValue.charAt(index - 1).match(dontMatch) ||
                           child.nodeValue.charAt(index+keyword.length).match(dontMatch))
                            break;
                        // alert(child.nodeValue.charAt(index+keyword.length + 1));
                        callback.call(window, child, index)
                    }
                }
            }
        }

        // Linkup function, call with various type cases (below)
        function linkup(node, index) {
            node.splitText(index+keyword.length);
            var a= document.createElement('a');
            a.href= linkUrl;
            a.appendChild(node.splitText(index));
            node.parentNode.insertBefore(a, node.nextSibling);
            limit.push(keyword.toLowerCase()); // Add the keyword to memory
            urlMemory.push(linkUrl);           // Add the url to memory
        }

        // lower case (already applied)
        findPlainTextExceptInLinks(lbp.vrs.holder, keyword, linkup);

    Thanks in advance for your help. I'm nearly ready to launch the script, and will gladly comment in kudos to you for your assistance.

    Read the article

  • Connection reset when calling disconnect() using enterprisedt's ftp java framework

    - by Frederik Wordenskjold
    I'm having trouble disconnecting from an FTP server using the enterprisedt Java FTP framework. I simply cannot call disconnect() on a FileTransferClient object without getting an error. I do not do anything besides connecting to the server, and then disconnecting:

        // create client
        log.info("Creating FTP client");
        ftp = new FileTransferClient();

        // set remote host
        log.info("Setting remote host");
        ftp.setRemoteHost(host);
        ftp.setUserName(username);
        ftp.setPassword(password);

        // connect to the server
        log.info("Connecting to server " + host);
        ftp.connect();
        log.info("Connected and logged in to server " + host);

        // Shut down client
        log.info("Quitting client");
        ftp.disconnect();
        log.info("Example complete");

    When running this, the log reads:

        INFO [test] 28 maj 2010 16:57:20.216 : Creating FTP client
        INFO [test] 28 maj 2010 16:57:20.263 : Setting remote host
        INFO [test] 28 maj 2010 16:57:20.263 : Connecting to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Connected and logged in to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Quitting client
        ERROR [FTPControlSocket] 28 maj 2010 16:57:21.026 : Read failed ('' read so far)

    And the stacktrace:

        com.enterprisedt.net.ftp.ControlChannelIOException: Connection reset
            at com.enterprisedt.net.ftp.FTPControlSocket.readLine(FTPControlSocket.java:1029)
            at com.enterprisedt.net.ftp.FTPControlSocket.readReply(FTPControlSocket.java:1089)
            at com.enterprisedt.net.ftp.FTPControlSocket.sendCommand(FTPControlSocket.java:988)
            at com.enterprisedt.net.ftp.FTPClient.quit(FTPClient.java:4044)
            at com.enterprisedt.net.ftp.FileTransferClient.disconnect(FileTransferClient.java:1034)
            at test.main(test.java:46)

    It should be noted that I can connect and do things with the server without problems, like getting a list of files in the current working directory. But I can't, for some reason, disconnect! I've tried using both active and passive mode. The above example is, by the way, copy/pasted from their own example. I cannot find ANYTHING related to this by doing a Google search, so I was hoping you have any suggestions, or experience with this issue.

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7.

    Detailed description: the first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to display them. The first process uses a third-party API to paint the images from the device to an HDC. This HDC has to be provided by me. Note: there is already a connection open between the two processes. They are communicating via anonymous pipes and share memory-mapped file views.

    Thoughts: how would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!

    Read the article

  • Essence of BiMap in Google collections

    - by littleEinstein
    I am still quite puzzled by the BiMap in Google Collections/Guava. It was claimed that "the two bimaps are backed by the same data; any changes to one will appear in the other." I browsed through the source code, and I found the use of delegate in ForwardingMap. But in any actual subclass of StandardBiMap, I do see the data being put into both the forward and the reverse map.

    So what is the essence, and why does it claim to have saved space by keeping only one copy of the data? Is it just that the actual objects are one set, but two distinct sets of references to these objects are still needed, one set maintained in the forward map and the other in the reverse map?

        private V putInBothMaps(K key, V value, boolean force) {
            boolean containedKey = containsKey(key);
            if (containedKey && Objects.equal(value, get(key))) {
                return value;
            }
            if (force) {
                inverse().remove(value);
            } else if (containsValue(value)) {
                throw new IllegalArgumentException(
                    "value already present: " + value);
            }
            V oldValue = super.put(key, value);
            updateInverseMap(key, containedKey, oldValue, value);
            return oldValue;
        }
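    As a side note, the "backed by the same data" behaviour can be observed from the outside with a tiny sketch like the one below (a minimal illustration using Guava's HashBiMap; the class and variable names here are mine):

        import com.google.common.collect.BiMap;
        import com.google.common.collect.HashBiMap;

        public class BiMapViewsDemo {
            public static void main(String[] args) {
                // Forward view: key -> value
                BiMap<String, Integer> forward = HashBiMap.create();
                forward.put("one", 1);

                // Inverse view: value -> key, backed by the same entries
                BiMap<Integer, String> inverse = forward.inverse();
                System.out.println(inverse.get(1)); // prints "one"

                // A change made through the inverse view is visible in the forward view
                inverse.put(2, "two");
                System.out.println(forward.get("two")); // prints 2
            }
        }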

    Read the article

  • Programmatically modifying a file on Windows Server 2008 (Web Ed.)

    - by Tom
    I have written a .NET 2008 application, incorporating Microsoft.Office.Interop.Excel, that modifies an existing Excel 2007 spreadsheet. It works perfectly on my WinXP development computer. When I upload the app to a Microsoft Web Server 2008, it opens the file and reads from the file, but when the app tries to save the file, it throws this exception:

        "System.Runtime.InteropServices.COMException (0x800A03EC): 'july2009.xlsx' is read-only. To save a copy, click OK, then give the workbook a new name in the Save As dialog box."

    The file is NOT read-only, nor is it opened by any other user or app. The app and the Excel file both reside on the D: (data-only) drive.

    My first instinct was to look at file permissions. When nothing else worked, I literally created a temporary Group, added EVERY user and security entity to it and granted the group full control of the entire D: drive. No luck. Then I tried manually elevating the permission by running my app as administrator. No luck. Finally, I copied the file to my XP development computer and ran the app there. Of course it worked perfectly.

    Can anyone please tell me how to give my program permission to edit a file on Server 2008? Thanks!

    Read the article

  • Why does the .NET tab in the 'Add Reference' dialog in Visual Studio not list the contents of the GAC?

    - by abroun
    Duplicate of: Getting assemblies to show in the .NET tab of Add Reference.

    So, I'm using Visual C# 2008 Express Edition and I've just been on a bit of a detour, as I found out that my assumption that the .NET tab of the 'Add Reference' dialog lists the contents of the GAC was incorrect. This was a bit of a problem for me, as the assembly that I wanted to reference from my project was only available in the GAC. (It was Microsoft.XNA.Framework v2.0, obtained from the XNA 2.0 redistributable, and as far as I could see it installed only into the GAC.)

    I worked round the problem by setting the reference to Microsoft.XNA.Framework manually in the .csproj file and then getting a copy of the DLL out of the cache. I was then able to create a directory for the DLL, add it to Visual Studio's list of assembly directories in the registry and then voila! I could see it in the .NET tab.

    This all seems like a bit of a faff to me, and I don't think that my initial assumption (that the .NET tab shows the contents of the GAC) was that unreasonable or would be that uncommon. Can someone who knows more than me tell me why the contents of the GAC aren't shown? The documentation just says that they are not, but is there a good reason? Is there actually a way to get the entire contents of the GAC to be listed? A tick box I've missed somewhere? Any info much appreciated.

    Read the article

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing/cost. I was using NEDMalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else -- as I understand things, NEDMalloc is just a replacement for the C-based malloc() and free(), not the C++-based new and delete operators (which I use extensively).

    The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first need to lock the pointer, then free the object, and lastly unlock and free the mutex. However, using malloc to create a boost::mutex appears impossible, because I can't initialize the private object as calling the constructor directly ist verboten.

    So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc ~ but the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nice side by side. There has to be a better solution. What would you suggest?
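    To frame the question a little: the standard way to construct a non-copyable object such as boost::mutex inside storage obtained from malloc (or a malloc replacement like nedmalloc) is placement new rather than a direct constructor call. A minimal sketch, assuming Boost.Thread is available and with std::malloc standing in for the custom allocator:

        #include <new>                      // placement new
        #include <cstdlib>                  // std::malloc / std::free
        #include <boost/thread/mutex.hpp>

        int main() {
            // Allocate raw storage, then run the constructor in place.
            void* raw = std::malloc(sizeof(boost::mutex));
            boost::mutex* m = new (raw) boost::mutex();

            {
                boost::mutex::scoped_lock lock(*m);
                // ... critical section ...
            }

            // Destroy explicitly, then hand the raw storage back.
            m->~mutex();
            std::free(raw);
            return 0;
        }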

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application.

    Now, if I take a look at reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine.

    What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking?

    To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process.

    I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerIdId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy.

    Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small
    - Anything else?

    -- Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
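    For context, the SqlBulkCopy path being described looks roughly like the sketch below (an illustration only, not the asker's code; the connection string handling, table name and batch size are placeholders to experiment with):

        using System.Data;
        using System.Data.SqlClient;

        class BulkLoader
        {
            static void Load(string connectionString, DataTable chunk)
            {
                // TableLock plus a larger BatchSize are the usual first knobs to try
                // before resorting to dropping indexes or staging tables.
                using (var bulk = new SqlBulkCopy(connectionString,
                                                  SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 5000;        // rows per round trip
                    bulk.BulkCopyTimeout = 0;     // disable the timeout for big loads
                    bulk.WriteToServer(chunk);    // chunk's columns must match the table
                }
            }
        }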

    Read the article

  • Python: OSX Library for fast full screen jpg/png display

    - by Parand
    Frustrated by the lack of a simple ACDSee equivalent for OS X, I'm looking to hack one up for myself. I'm looking for a GUI library that accommodates:

    - Full screen image display
    - High quality image fit-to-screen (for display)
    - Low memory usage
    - Fast display
    - Reasonable learning curve (the simpler the better)

    Looks like there are several choices, so which is the best? Here are some I've run across: PyOpenGL, PyGame, PyQT, wxpython. I don't have any particular experience with any of these, nor any strong desire to become an expert - I'm looking for the simplest solution. What do you recommend?

    [Update] For those not familiar with ACDSee, here's what it does that I care about:

    - Simple list/thumbnail display of images in a directory
    - Sort by name/size/type
    - Ability to view images full screen
    - Single-key delete while viewing full screen
    - Move to next/previous image while viewing full screen
    - Ability to select a group of images for: move to / copy to directory, delete, resize

    ACDSee has a bunch of niceties as well, such as remembering directories you've moved images to in the past, remembering your resize settings, displaying the total size of the images you've selected, etc. I've tried most of the options I could find (including Xee) and none of them quite get there. Please keep in mind that this is a programming/library question, not a criticism of any of the existing tools.

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this:

        find $BASEDIR -iname "*png" -exec pngout {} \;

    And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad.

    In this day and age of dual, quad, octo and hexa (?) core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need, for quite a lot of these utils are mono-threaded... I already had the case with MP3 encoders.)

    Would simply running all the pngout in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!?

    Or should I copy my three hundred files or so into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (which would be close enough) But how would I do this?
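    One common way to get this kind of fan-out, sketched below under the assumption that GNU find and xargs are available, is to let xargs manage a fixed pool of worker processes rather than backgrounding every job by hand (nproc reports the number of cores; substitute a literal such as 4 if it is not installed):

        #!/bin/bash
        # Run one pngout per file, keeping at most $(nproc) of them running at once.
        # -print0 / -0 keep filenames containing spaces intact.
        find "$BASEDIR" -iname '*.png' -print0 \
          | xargs -0 -n 1 -P "$(nproc)" pngout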

    Read the article

  • Maven String Replace of Text Web Resources

    - by Jaco van Niekerk
    I have a Maven web application with text files in src/main/webapp/textfilesdir. As I understand it, during the package phase this textfilesdir directory will be copied into the target/project-1.0-SNAPSHOT directory, which is then zipped up into target/project-1.0-SNAPSHOT.war.

    Problem: now I need to do a string replacement on the contents of the text files in target/project-1.0-SNAPSHOT/textfilesdir. This must be done after textfilesdir is copied into target/project-1.0-SNAPSHOT, but prior to the target/project-1.0-SNAPSHOT.war file being created. I believe this is all done during the package phase. How can a plugin (potentially maven-antrun-plugin) plug into the package phase to do this? The text files don't contain properties, like ${property-name}, to filter on. String replacement is likely the only option.

    Options:

    1. Modify the text files after the copy into the target/project-1.0-SNAPSHOT directory, yet prior to the WAR creation.
    2. After packaging, extract the text files from the WAR, modify them, and add them back into the WAR.

    I'm thinking there is another option here I'm missing. Thoughts, anyone?

    Read the article

  • Is there a Standard or Best Practice for Perl Programs, as opposed to Perl Modules?

    - by swestrup
    I've written any number of Perl modules in the past, and more than a few stand-alone Perl programs, but I've never released a multi-file Perl program into the wild before.

    I have a Perl program that is almost at the beta stage and is going to be released open source. It requires a number of data files, as well as some external Perl modules -- some I've written myself, and some from CPAN -- that I'll have to bundle with it so as to ensure that someone can just download my program and install it without worrying about hunting for obscure modules. So, it sounds to me like I need to write an installer to copy all the files to standard locations so that a user can easily install everything.

    The trouble is, I have no idea what the standard practice would be for this. I have found lots of tutorials on Perl module standards, but none on Perl program standards. Does anyone have any pointers to standard paths, installation procedures, etc., for Perl programs?

    This is going to be complicated by the fact that the program is multi-platform. I've been testing it in Linux, but it's designed to work equally well in Windows.

    Read the article

  • gdb reverse debugging error

    - by Werner
    Hi, I started to try reverse debugging with gdb 7, following the tutorial at http://www.sourceware.org/gdb/wiki/ProcessRecord/Tutorial and I thought, great!

    Then I started to debug a real program which gives an error at the end. So I run it with gdb, and I put a breakpoint just before the place I think the error appears. Then I type "record" in order to start recording actions for future reverse-debugging. But after some steps I get:

        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGTRAP, Trace/breakpoint trap.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6
        (gdb) n
        Single stepping until exit from function memcpy,
        which has no line number information.
        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGABRT, Aborted.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6

    Before I look at it in detail, I wonder if this feature is still buggy, or if I should start to record from the beginning. Where this "record" error happens, just an object is created as a copy of another. Thanks

    Read the article

  • CSS does not load when updated in an ASP.NET MVC 2 website

    - by dannie.f
    I have a weird problem. Whenever the website designer updates the CSS file for the website I am working on, I overwrite the old CSS file with the new version and copy any new images. The problem is that whenever I do this, the images in the website header no longer load, along with some other styles that depend on the CSS loading some images.

    I have tried everything: clearing the browser cache, deleting ASP.NET temporary files, restarting the browser, changing browsers and pressing Ctrl + F5. Still the images don't load. What usually happens is that the problem eventually corrects itself and I don't know why. This is driving me crazy. Does anyone else have this problem and know how to fix it?

    If this helps, the header is located in a partial view and the master page loads the CSS file using Url.Content. The images are CSS sprites. This issue persists no matter which browser I try: Chrome, Firefox or IE. I am using Visual Studio 2008 SP1 on Windows 7 Ultimate. Other team members have experienced this issue.

    Read the article

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web application to production. Currently, that group has access to our current code repository in VSS and uses VSS to deploy code that has been checked into VSS.

    How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would a Team Explorer client access suffice?

    I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx files and DLLs from the QA environment to production, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS.

    What confuses me is whether bin (binary) files that are local to the project should go into TFS. If so, doesn't this create problems for other developers, in that only one developer - the one with the binary checked out - can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds real ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number?

    Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

    Read the article

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code, I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost.

    Later, when getting a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo. Now what I want to do is convert the old SVN repo and then have all change sets from the currently existing Mercurial repo imported into this converted repo, except the very first commit to Mercurial.

    I've converted the SVN repo to a local Mercurial repo, but now is when I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one, having a splice map remove the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all change sets from the current Mercurial repo into the converted one, and then rebase the second revision of the current repo onto the last commit from the old SVN repository, but I can't get that to work either.

    To make this clearer, let's say I have these two repositories (where revB1 is actually a copy of revA2):

        A: revA1-revA2
        B: revB1-revB2-revB3

    Now I want to combine these two into a new repository containing this:

        C: revA1-revA2-revB2-revB3

    Read the article

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of files and directories included, svndumpfilter completes, BUT importing the resulting dump does not produce the same files as the current directory has.

    So, how do I properly split the directory from that repository while keeping the history?

    EDIT: I specifically want trunk/myproj to be the trunk in a new repository PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision from before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history.

    BUMP: C'mon, there must be someone who definitely knows if this is doable or not, and how!

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.?

    Thank you

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case?

    This is a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • How do I best remove the Unicode characters that XHTML regards as invalid, using PHP?

    - by Andrew Stacey
    I run a forum designed to support an international mathematics group. I've recently switched it to Unicode for better support of international characters. In debugging this conversion, I've discovered that not all Unicode characters are considered valid XHTML (the relevant website appears to be http://www.w3.org/TR/unicode-xml/).

    One of the steps that the forum software goes through before presenting the posts to the browser is an XHTML validation/sanitisation step. It seems a reasonable idea that at that stage it should remove any Unicode characters that XHTML doesn't like. So my question is: is there a standard (or best) way of doing this in PHP? (The forum is written in PHP, by the way.)

    I guess that the failsafe would be a simple str_replace (if that's also the best, do I need to do anything extra to make sure it works properly with Unicode?), but that would involve me having to go through the XHTML DTD (or the above-referenced W3 page) carefully to figure out what characters to list in the search part of str_replace. So if this is the best way, has someone already done that so that I can steal, err, copy, it?

    (Incidentally, the character that caused the problem was U+000C, the 'formfeed', which (according to the W3 page) is valid HTML but invalid XHTML!)
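    For what it's worth, the set of characters XML 1.0 allows can be expressed as a single regular expression over code-point ranges, so one possible sketch (my assumption, not something taken from the forum software) is a preg_replace rather than a long str_replace list:

        <?php
        // Strip code points outside the XML 1.0 Char production:
        // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
        // The /u modifier makes the pattern operate on UTF-8 code points.
        function strip_invalid_xml_chars($text)
        {
            return preg_replace(
                '/[^\x{0009}\x{000A}\x{000D}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]/u',
                '',
                $text
            );
        }

        // Example: U+000C (form feed) is removed, ordinary text is kept.
        var_dump(strip_invalid_xml_chars("maths\x0Cforum")); // string(10) "mathsforum"

    This enforces the baseline XML Char production, which covers the form feed case; the W3C note referenced above lists further "discouraged" characters that could be added to the same character class.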

    Read the article
