Search Results

Search found 6211 results on 249 pages for 'svn export'.


  • What should be the "trunk": development, or release?

    - by Nix
    I have the unfortunate opportunity of source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that after each production release we spend countless hours merging changes into a "production support" environment. Please do not harass me; this was not my doing. I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool.

    Current setup:

      Product.1.0 (TRUNK: current production code; pending bug fixes sit at this level)
      Product.2.0 (the true trunk: anything checked in gets tested, then released next production cycle; a lot of changes occur in this view)

    My proposal is going to be to swap them: have all development done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes:

      Production
      Production.2.0.SP.1

    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.


  • How do I determine if a terminal is color-capable?

    - by asjo
    I would like to change a program to automatically detect whether a terminal is color-capable or not, so that when I run said program from within a non-color-capable terminal (say M-x shell in (X)Emacs), color is automatically turned off. I don't want to hardcode the program to detect TERM={emacs,dumb}. I am thinking that termcap/terminfo should be able to help with this, but so far I've only managed to cobble together this (n)curses-using snippet of code, which fails badly when it can't find the terminal:

      #include <stdlib.h>
      #include <curses.h>

      int main(void) {
          int colors = 0;
          initscr();
          start_color();
          colors = has_colors() ? 1 : 0;
          endwin();
          printf(colors ? "YES\n" : "NO\n");
          exit(0);
      }

    I.e. I get this:

      $ gcc -Wall -lncurses -o hep hep.c
      $ echo $TERM
      xterm
      $ ./hep
      YES
      $ export TERM=dumb
      $ ./hep
      NO
      $ export TERM=emacs
      $ ./hep
      Error opening terminal: emacs.
      $

    which is... suboptimal.
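    For what it's worth, a minimal sketch of the lower-level terminfo route, assuming ncurses: setupterm() with a non-NULL errret argument reports failure through that pointer instead of printing an error and exiting the way initscr()'s error path does, and tigetnum("colors") then gives the color count directly:

      #include <stdio.h>
      #include <unistd.h>
      #include <curses.h>
      #include <term.h>

      int main(void) {
          int err = 0;

          /* NULL terminal name means "use $TERM"; passing &err suppresses
           * the fatal exit, leaving err == 1 only on success */
          if (setupterm(NULL, STDOUT_FILENO, &err) != OK || err != 1) {
              printf("NO (unknown or unusable terminal)\n");
              return 0;
          }

          /* "colors" is the terminfo capability holding the number of
           * colors; it is 0 or -1 for monochrome terminals */
          printf(tigetnum("colors") > 0 ? "YES\n" : "NO\n");
          return 0;
      }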


  • What IDE setup and workflow is used for OSGi development?

    - by Falx
    I made quite a few easy OSGi test projects in Eclipse RCP. My typical workflow would always be:

      1. Make 3 different projects: APIproject, Clientproject and Serverproject
      2. Edit the MANIFEST.MF of APIproject to export the API package
      3. Edit the MANIFEST.MF files of Clientproject and Serverproject to add the required API package
      4. Choose "Run as..." > "Plugin Framework"
      5. The OSGi console starts in Eclipse and everything seems to work

    I also tried wiring things by using Declarative Services, which worked well like this too. Now recently I wanted to try out iPOJO. The problem is that I get the feeling that I've been doing my OSGi development the wrong way. Could it be that I should instead make one project and make it work as if no OSGi were involved, and then afterwards export each package to its own bundle by means of (for instance) the bnd tool? Should development be done in a normal Eclipse (Java, not RCP), or any other Java IDE for that matter? So these are my questions:

      • What IDE setup is normally used to develop OSGi with iPOJO?
      • What is the normal workflow to be used when developing OSGi projects (maybe with iPOJO)?


  • Call to undefined function 'Encrypt' - Attempting to Link OMF Lib

    - by Changeling
    I created a DLL using Visual Studio 2005 VC++ and marked a function for export (for testing). I then took the .LIB file created and ran it through the COFF2OMF converter program bundled with Borland C++ Builder 5, and it returns the following:

      C:\>coff2omf -v -lib:ms MACEncryption.lib MACEncryption2.lib
      COFF to OMF Converter Version 1.0.0.74 Copyright (c) 1999, 2000 Inprise Corporation

      Internal name                        Imported name
      -------------                        -------------
      ??0CMACEncryptionApp@@QAE@XZ
      ?Decrypt@CMACEncryptionApp@@QAEXXZ
      Encrypt                              Encrypt@0

    I added the MACEncryption2.lib file to my C++ Builder 5 project by going to Project > Add to Project... and selecting the library. The application links, but it cannot find the Encrypt function that I am declaring for export as follows in the VC++ code:

      extern "C" __declspec(dllexport) BSTR* __stdcall Encrypt()
      {
          CoInitialize(NULL);
          EncryptionManager::_EncryptionManagerPtr pDotNetCOMPtr;
          HRESULT hRes = pDotNetCOMPtr.CreateInstance(EncryptionManager::CLSID_EncryptionManager);
          if (hRes == S_OK)
          {
              BSTR* str = new BSTR;
              BSTR filePath = (BSTR)"C:\\ICVER001.REQ";
              BSTR encrypt = (BSTR)"\"test";
              pDotNetCOMPtr->EncryptThirdPartyMessage(filePath, encrypt, str);
              return str;
          }
          return NULL;
          CoUninitialize();  // note: unreachable, since it sits after both returns
      }

    C++ Builder code:

      __fastcall TForm1::TForm1(TComponent* Owner)
          : TForm(Owner)
      {
          Encrypt();
      }

    (Yes, I know I am encapsulating another DLL; I am doing this for a reason, since Borland can't "see" the .NET DLL definitions.) Can anyone tell me what I am doing wrong, so I can figure out why Builder cannot find the function Encrypt()?
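    A hedged guess at the mismatch, based only on the listing above: the import library exposes the stdcall-decorated name Encrypt@0, while a call to an undeclared Encrypt() in C++ Builder assumes the __cdecl convention and therefore searches for a differently decorated symbol. Declaring the import on the Builder side with the matching convention is the usual first thing to try:

      // Builder-side declaration sketch: matches the VC++ export's calling
      // convention so the linker looks for the stdcall-decorated name
      extern "C" BSTR* __stdcall Encrypt();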


  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub.

    What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)
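    A sketch of the submodule route, assuming the home box is reachable over SSH as "home" and a single private GitHub repo holds the umbrella project (all paths and names hypothetical):

      # create the umbrella repository
      git init everything && cd everything

      # each module stays a standalone repo on the home server
      git submodule add ssh://home/srv/git/project1.git project1
      git submodule add ssh://home/srv/git/project2.git project2
      git commit -m "Add modules as submodules"

      # the umbrella goes to the one private GitHub repo; a cron job or a
      # post-commit hook on the home box can run the push
      git remote add github git@github.com:you/everything.git
      git push github master

    One caveat worth noting: GitHub then stores only the submodule pointers, so the module contents themselves remain backed up only wherever the submodule URLs point.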


  • Git as Mercurial client? Why no git-hg?

    - by aapeli
    This is a question that's been bothering me for a while. I've done my homework and checked Stack Overflow, and found at least these two topics about my question: "Git for Mercurial like git-svn" and "Git interoperability with a Mercurial repository". I've done some serious googling to solve this issue, but so far with no luck. I've also read the Git Internals book and the "Behind the Scenes" material on Mercurial to try to figure this out. I'm still a bit puzzled why I haven't been able to find any suitable git-hg type of tool.

    From my perspective, git-svn is one of the main features why I've chosen to use git over mercurial, also at work. It allows me to use a workflow I like, and nobody else needs to bother if they don't care. I just don't see the point in using an intermediate hg repo to convert back and forth, as suggested in one of the threads.

    So anyway, from what I've read, hg and git seem very similar in conceptual design. There are differences under the hood, but none of those should prevent creating a git client for hg. As it seems to me, remote-tracking branches and octopus merges make git even more powerful than hg is.

    So, the real question: is there any real reason why git-hg does not exist (or at least is very hard to find)? Is there some animosity from git users (and developers) towards their hg counterparts that has caused the lack of a git-hg tool? Do any of you have plans to develop something like this and go public with it? I could volunteer (although with very feeble C skills) to participate to get this done. I just don't possess the full knowledge to start this up myself. Could this be the tool to end all DVCS wars for good?


  • How to maintain base files for a development environment centrally while allowing people to change their…

    - by Ittai
    Hi, what I'd like to do is keep files in a central location so that when I add people to my development team they see the base version of these files, while the rest of the team still has the ability to work with their own local versions. I know I can just put the files in source control (we use TortoiseSVN) and have my team change the local versions, but I'd rather not: the exclamation mark signaling that the file has been changed and needs to be committed, quite frankly, irritates me greatly. I'll give two examples of what I mean:

      1. We use quite a few build.xml files which relate to a single properties file containing many definitions. Some of them can differ between team members (mainly temporary working directories), and I'd like a new team member to have the ability to get the properties file with the base config but change it if they wish.
      2. Have the Eclipse settings file in SVN, so that when a new team member joins they can just retrieve the files from the server and have a base system running. If they wish, they will be able to change some of these settings.

    Thanks, Ittai
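    One pattern that fits this, sketched with hypothetical file names: commit a versioned template, keep the real per-developer file unversioned and ignored, and let each developer copy the template once. Local edits then never show up as modified.

      # commit the base version under a template name
      svn add build.properties.template
      svn commit -m "Base build properties"

      # ignore the real file so per-developer edits are invisible to SVN
      svn propset svn:ignore "build.properties" .
      svn commit -m "Ignore per-developer build.properties"

      # each new team member starts from the template
      cp build.properties.template build.properties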


  • How to parametrize an import in a View?

    - by Gianluca Colucci
    Hello everybody, I am looking for some help and I hope that some good soul out there will be able to give me a hint :) In the app I am building, I have a View. When an instance of this View gets created, it "imports" (MEF) the corresponding ViewModel. Here is some code:

      public partial class ContractEditorView : Window
      {
          public ContractEditorView()
          {
              InitializeComponent();
              CompositionInitializer.SatisfyImports(this);
          }

          [Import(ViewModelTypes.ContractEditorViewModel)]
          public object ViewModel
          {
              set { DataContext = value; }
          }
      }

    And here is the export for the ViewModel:

      [PartCreationPolicy(CreationPolicy.NonShared)]
      [Export(ViewModelTypes.ContractEditorViewModel)]
      public class ContractEditorViewModel : ViewModelBase
      {
          public ContractEditorViewModel()
          {
              _contract = new Models.Contract();
          }
      }

    Now, this works if I want to open a new window to create a new contract, but it does not if I want to use the same window to edit an existing contract. In other words, I would like to add a second constructor both to the View and to the ViewModel: the original constructor would accept no parameters and therefore create a new contract; to the new constructor I'd like to pass an ID so that I can load a given entity. What I don't understand is how to pass this ID to the ViewModel import. Thanks in advance, Cheers, Gianluca.
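    Since MEF satisfies imports only after a part's parameterless constructor has run, one common workaround is two-step initialization instead of a second constructor: compose first, then push the ID into the already-created ViewModel. A sketch under that assumption (the Load method and Contract.GetById are hypothetical names):

      public partial class ContractEditorView : Window
      {
          public ContractEditorView() : this(null) { }

          public ContractEditorView(int? contractId)
          {
              InitializeComponent();
              CompositionInitializer.SatisfyImports(this);

              // second step instead of a second constructor
              if (contractId.HasValue)
                  ((ContractEditorViewModel)DataContext).Load(contractId.Value);
          }
      }

      public class ContractEditorViewModel : ViewModelBase
      {
          public ContractEditorViewModel() { _contract = new Models.Contract(); }

          // hypothetical loader; replace with however entities are fetched
          public void Load(int id) { _contract = Models.Contract.GetById(id); }
      }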


  • Super-light software development process

    - by Walty
    Hi, most of the development processes I have been involved in so far have had teams of a SINGLE member, or occasionally two. We used Python + Django for the major development; the development process is actually very fast, and we do have code reviews, design-pattern discussions, and constant refactoring. Though team size is small, I do think there are some development processes / best practices that could be enforced. For example, using SVN is definitely better than regular copy backups.

    I did read some articles and books about Agile, XP and continuous integration. I think they are nice, but still too heavy for this case (a team of 1 or 2, and fast coding). For example, IMHO, with good design patterns and iterative development plus refactoring, TDD MIGHT be overkill, or at least its overhead does not outweigh the advantages. And so is pair programming. Automated testing is a nice idea, but it seems not technically feasible for every project.

    Our current practices are: SVN + milestones + code review. I wonder if there are development processes / best practices specifically targeted at such super-light teams? Thanks.


  • Importing from referenced assembly - MEF

    - by cmaduro
    I have the following simplified code:

      namespace Silverbits.Applications
      {
          public partial class SilverbitsApplication : Application
          {
              [Import("MainPage")]
              public UserControl MainPage
              {
                  get { return RootVisual as UserControl; }
                  set { RootVisual = value; }
              }

              public SilverbitsApplication()
              {
                  this.Startup += this.SilverbitsApplication_StartUp;
                  this.Exit += new EventHandler(SilverbitsApplication_Exit);
                  this.UnhandledException += this.SilverbitsApplication_UnhandledException;
                  InitializeComponent();
              }

              private void SilverbitsApplication_StartUp(object sender, StartupEventArgs e)
              {
                  CompositionInitializer.SatisfyImports(this);
              }
          }
      }

      namespace Manpower4U
      {
          public class App : SilverbitsApplication
          {
              public App() : base() { }
          }
      }

      namespace Manpower4U
      {
          [Export("MainPage")]
          public partial class MainPage : UserControl
          {
              public MainPage()
              {
                  InitializeComponent();
              }
          }
      }

    The idea is that I have a Silverbits library, which is a completely different solution, and a Manpower4U Silverlight application that references my Silverbits library. I want to export MainPage from Manpower4U and set it as the RootVisual in my SilverbitsApplication class. The SilverbitsApplication class is basically App.xaml/App.xaml.cs from the Silverlight application, only I put it in a class library and subclassed the App class in Manpower4U, which is now the entry point of Manpower4U. MEF cannot resolve the import. How do I get this to work?
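    A hedged diagnostic sketch, not a confirmed fix: composing against an explicit catalog takes MEF's default catalog discovery out of the equation, which helps establish whether the Export is simply not being seen. All type names come from the question.

      // build a catalog that provably contains both assemblies
      var catalog = new AggregateCatalog(
          new AssemblyCatalog(typeof(App).Assembly),                    // Manpower4U
          new AssemblyCatalog(typeof(SilverbitsApplication).Assembly)); // Silverbits

      var container = new CompositionContainer(catalog);
      container.ComposeParts(this);   // instead of CompositionInitializer.SatisfyImports(this)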


  • Compiling a GStreamer plugin on Windows

    - by utnapistim
    Hello all. My question: what is the correct way to compile a GStreamer plugin on Windows, so that it will be accepted by GStreamer (actually Songbird on top of GStreamer)?

    My setup: I have downloaded the Songbird sources following the steps described here, and I have a trunk/dependencies/windows-i686-msvc8 directory within my SVN sources with all the GStreamer binaries. I have created an empty GStreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the GStreamer binaries in the Songbird dependencies folder. The compilation was done with VS2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the GStreamer Plugin Writer's Guide. The DLL was linked with libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib, gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib and nspr4.lib, ignoring all default libraries. I have compiled the files as both .c and .cpp with the same results.

    Testing: I have installed the Songbird binaries corresponding to the correct SVN version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my GStreamer plugin. Songbird will not load the plugin. I have also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.


  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file.

    My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

      echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables, so that I don't need to change the script? Or is my best option to alter the script to take command-line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't see the right values:

      export AMI_ID=$AMI_ID
      export TYPE=$TYPE
      external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line its output:

      + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
      ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, the Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson. Does anyone know a trick to achieve this?
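    Two hedged observations, sketched below: Hudson exports build parameters to the environment of any process the build step spawns, so a script invoked as a child process should already see $AMI_ID; and the shell's trace mode is exactly the "echo on" being asked for, since it prints each command with a "+" prefix before executing it.

      #!/bin/sh
      # deploy.sh (hypothetical name): 'set -x' turns on trace mode, which
      # restores the Hudson-style "+ command" lines in the console output
      set -x

      echo "Using AMI $AMI_ID and type $TYPE"   # values inherited from Hudson

    Alternatively, the trace flag can be applied from the build step without editing the script at all:

      sh -x ./deploy.sh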


  • Access Flash Symbol Via Code from an .as Class

    - by David
    Hi, I've got a movie clip symbol in my .fla file that I need to reference in an .as file which is in a subfolder of the project. I select the symbol in the library and edit its properties. Its name is bubble; I check Export for ActionScript and Export in frame 1, and I give it a class name of Bubble. Now I need to go to my .as class, called SomeClass.as. There, I need to reference the symbol, because I want to move it from within related code in that .as file. But if I try bubble.x I get an error. If I try var myBubble:Bubble = new Bubble(); I get 'Access of undefined property myBubble'. I was told that I might try 'importing the document class', but how do you import a class which is in the root directory of your app from within a class that's in a subfolder? (I don't know if this would provide the solution anyway.) Thanks.
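    One hedged sketch of a common arrangement (the package path and method names are hypothetical): let the document class construct the library symbol, then hand the instance to the subfolder class, which keeps a reference and moves it. This sidesteps importing the document class entirely.

      // SomeClass.as, in a package matching its subfolder
      package subfolder
      {
          import flash.display.MovieClip;

          public class SomeClass
          {
              private var _bubble:MovieClip;

              // The document class would do:
              //   var b:Bubble = new Bubble();   // Bubble = library class name
              //   addChild(b);
              //   var helper:SomeClass = new SomeClass(b);
              public function SomeClass(bubble:MovieClip)
              {
                  _bubble = bubble;
              }

              public function nudge():void
              {
                  _bubble.x += 10;   // move the symbol from within this class
              }
          }
      }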


  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it?

    Context: I created a user like so:

      SQL> create user IMPORTER identified by "12345";
      SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

      C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now, there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

      Warning: the objects were exported by OVIEDOE, not by you

      import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
      export client uses WE8ISO8859P1 character set (possible charset conversion)
      IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
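    A quick way to see where (or whether) the rows landed is to ask the data dictionary directly. A sketch, assuming a session with DBA-level privileges such as SYSTEM:

      -- list every table each candidate schema now owns
      SELECT owner, table_name
      FROM   all_tables
      WHERE  owner IN ('IMPORTER', 'OVIEDOE')
      ORDER  BY owner, table_name;

      -- count all objects (tables, indexes, sequences, ...) under IMPORTER
      SELECT object_type, COUNT(*)
      FROM   dba_objects
      WHERE  owner = 'IMPORTER'
      GROUP  BY object_type;

    If IMPORTER owns nothing, checking the OVIEDOE schema the same way shows whether the objects were created under the original owner instead.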


  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous-integration MSBuild script which copies a database in on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods. After a fair bit of research I've come across the following methods:

      1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
      2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes. Using whichever method, the process could be either:

      1. Export from on-premise SQL Server 2012 to a local BACPAC
      2. Upload the BACPAC to blob storage
      3. Import the BACPAC into SQL Azure via the hosted DAC

    or:

      1. Export from on-premise SQL Server 2012 to a local BACPAC
      2. Import the BACPAC into SQL Azure via the client DAC

    Question. All of the above seems like quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
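    One avenue worth checking before building anything (a hedged sketch; server names and credentials are hypothetical): the DAC framework for SQL Server 2012 ships a command-line tool, SqlPackage.exe, whose Export and Import actions correspond to the two steps of the client-side process above.

      REM export the on-premise database to a local BACPAC
      SqlPackage.exe /Action:Export ^
          /SourceServerName:localhost /SourceDatabaseName:MyDb ^
          /TargetFile:C:\build\MyDb.bacpac

      REM import the BACPAC into SQL Azure
      SqlPackage.exe /Action:Import ^
          /SourceFile:C:\build\MyDb.bacpac ^
          /TargetServerName:myserver.database.windows.net ^
          /TargetDatabaseName:MyDb /TargetUser:ciuser /TargetPassword:secret

    Both commands can be wrapped in an MSBuild <Exec> task directly.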


  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. According to some "best practices", my project folder looks like the following:

      MySolution
        main.sln
        Services
          services.sln
          Service A
            files
          Service A Test
            files
        View
          projectfiles
        Persistence
          persistence.sln
          PersistenceXml
            files
          PersistenceXml Test
            files
          PersistenceDB
            files
          PersistenceDB Test
            files

    The idea is that main.sln only contains the projects for the application, meaning no test projects. The subsolutions contain the project(s) and their corresponding test projects. I was able to put all these projects under version control with AnkhSVN, so I have the same structure in the trunk of the repository. Committing changes was also no problem.

    Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything inside that solution. It skipped services.sln, persistence.sln and all the test projects. Up to here everything is fine. Now comes the problem: when I try to check out a subsolution (e.g. services.sln) I get an error; I think it was UnsupportedOperation. I guess this happens because AnkhSVN tries to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of for now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?



  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files per site being customised. The customised files for each site are:

      /library/mysql_connect.php
      /public_html/css/*
      /public_html/ftparea/*
      /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files, each site sitting within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location direct from there. Unfortunately, making a major architecture change at this stage could be problematic.

    In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and each site using these common files unless an account-specific file exists. For example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that generally this approach is possible, similar to how Zend Framework (and probably others) redirect all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it, and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.
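    The fallback described maps fairly directly onto mod_rewrite's file-existence test. A sketch, assuming the shared tree lives at /home/commonfiles/public_html (all paths hypothetical), placed in each site's vhost config:

      RewriteEngine On

      # if the requested file does not exist in this site's docroot...
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f

      # ...serve the common copy instead
      RewriteRule ^/?(.*)$ /home/commonfiles/public_html/$1 [L]

    Symlinking the common directories into each account is another low-tech variant of the same idea, at the cost of one symlink per shared path.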


  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course).

    PROBLEM: I cannot do it, for two reasons (the first one is solved):

      1. Using TFS merge I have access to the file comparison, but I cannot export the list of conflicting files; I can only try to resolve the conflicts. (I have solved problem 1 using Beyond Compare, which allows me to export the file list.)
      2. Using TFS merge I can only access each file manually to get its number of conflicts, but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option).

    There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools), but I am not sure which one could (if any) give me these metrics. I have also read several forums and questions here, but those are more general questions (which diff tool is better) and I am looking for a very specific report. So my questions are:

      • Is Visual Studio 2010 (still using TFS 2008) capable of such reporting/exporting?
      • Is there any tool that provides this kind of metrics? (I am currently trying Beyond Compare.)


  • Will this class cause memory leaks, and does it need a dispose method? (asp.net vb)

    - by Phil
    Here is the class to export a GridView to an Excel sheet:

      Imports System
      Imports System.Data
      Imports System.Configuration
      Imports System.IO
      Imports System.Web
      Imports System.Web.Security
      Imports System.Web.UI
      Imports System.Web.UI.WebControls
      Imports System.Web.UI.WebControls.WebParts
      Imports System.Web.UI.HtmlControls

      Namespace ExcelExport
          Public NotInheritable Class GVExportUtil

              Private Sub New()
              End Sub

              Public Shared Sub Export(ByVal fileName As String, ByVal gv As GridView)
                  HttpContext.Current.Response.Clear()
                  HttpContext.Current.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fileName))
                  HttpContext.Current.Response.ContentType = "application/ms-excel"
                  Dim sw As StringWriter = New StringWriter
                  Dim htw As HtmlTextWriter = New HtmlTextWriter(sw)
                  Dim table As Table = New Table
                  table.GridLines = GridLines.Vertical
                  If (Not (gv.HeaderRow) Is Nothing) Then
                      GVExportUtil.PrepareControlForExport(gv.HeaderRow)
                      table.Rows.Add(gv.HeaderRow)
                  End If
                  For Each row As GridViewRow In gv.Rows
                      GVExportUtil.PrepareControlForExport(row)
                      table.Rows.Add(row)
                  Next
                  If (Not (gv.FooterRow) Is Nothing) Then
                      GVExportUtil.PrepareControlForExport(gv.FooterRow)
                      table.Rows.Add(gv.FooterRow)
                  End If
                  table.RenderControl(htw)
                  HttpContext.Current.Response.Write(sw.ToString)
                  HttpContext.Current.Response.End()
              End Sub

              Private Shared Sub PrepareControlForExport(ByVal control As Control)
                  Dim i As Integer = 0
                  Do While (i < control.Controls.Count)
                      Dim current As Control = control.Controls(i)
                      If (TypeOf current Is LinkButton) Then
                          control.Controls.Remove(current)
                          control.Controls.AddAt(i, New LiteralControl(CType(current, LinkButton).Text))
                      ElseIf (TypeOf current Is ImageButton) Then
                          control.Controls.Remove(current)
                          control.Controls.AddAt(i, New LiteralControl(CType(current, ImageButton).AlternateText))
                      ElseIf (TypeOf current Is HyperLink) Then
                          control.Controls.Remove(current)
                          control.Controls.AddAt(i, New LiteralControl(CType(current, HyperLink).Text))
                      ElseIf (TypeOf current Is DropDownList) Then
                          control.Controls.Remove(current)
                          control.Controls.AddAt(i, New LiteralControl(CType(current, DropDownList).SelectedItem.Text))
                      ElseIf (TypeOf current Is CheckBox) Then
                          control.Controls.Remove(current)
                          control.Controls.AddAt(i, New LiteralControl(CType(current, CheckBox).Checked))
                      End If
                      If current.HasControls Then
                          GVExportUtil.PrepareControlForExport(current)
                      End If
                      i = (i + 1)
                  Loop
              End Sub
          End Class
      End Namespace

    Will this class cause memory leaks? And does anything here need to be disposed of? The code is working, but I am getting frequent crashes of the app pool when it is in use. Thanks.
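    Two details in this class are worth a hedged sketch. First, StringWriter and HtmlTextWriter are IDisposable, so Using blocks make their cleanup deterministic even if RenderControl throws. Second, Response.End() stops the request by throwing a ThreadAbortException by design, which can look alarming in logs; CompleteRequest is the gentler alternative. The snippet below is a trimmed variant of the Export body, not a drop-in replacement:

      Using sw As New StringWriter()
          Using htw As New HtmlTextWriter(sw)
              table.RenderControl(htw)
              HttpContext.Current.Response.Write(sw.ToString())
          End Using
      End Using

      ' ends the response without the ThreadAbortException that Response.End() throws
      HttpContext.Current.Response.Flush()
      HttpContext.Current.ApplicationInstance.CompleteRequest()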


  • Mercurial fails with a file named ---.config - any way around this?

    - by Travis Laborde
    We are just beginning to learn and evaluate Mercurial, due to an increasing number of nightmare merges and various other problems we've had with SVN lately. A client wants us to pull down a live copy of their site, do some SEO work on it, and push it back to them. They have no source control at all. I figure this is a great project to try with Mercurial: instead of putting it into our SVN and exporting when we are done, we'll use Mercurial.

    But right away it seems I have a problem :) They have a file called "---.config" (without quotes) which seems to make our Mercurial choke; it just can't commit that file. I've created the repo and committed everything else, but I cannot get this one file committed. We are running on Windows 2008 x64 with TortoiseHg 1.0. I suppose I could ignore the file, since it is unlikely we'll need to work with it, but still: I'd like to learn how to use Mercurial a bit better. Is there a way around this?
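    A hedged guess at what is happening: a file name that starts with "-" looks like a command-line option to most tools. Mercurial's command line follows the usual convention that a bare "--" ends option parsing, so from a shell the file can be addressed explicitly (whether this helps through the TortoiseHg GUI is a separate question):

      REM "--" tells hg to stop parsing options, so the leading dashes
      REM in the file name are taken literally
      hg add -- ---.config
      hg commit -m "Add ---.config" -- ---.config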


  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    Hi. This seems to be a bit of an infamous error all over the web, so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly, this works perfectly with a PNG but gives the above error with JPG and GIF, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically, the solution is supposed to be to use a memory stream, as I am doing...

      public static byte[] ConvertImageToByteArray(Image imageToConvert)
      {
          using (var ms = new MemoryStream())
          {
              ImageFormat format;
              switch (imageToConvert.MimeType())
              {
                  case "image/png":
                      format = ImageFormat.Png;
                      break;
                  case "image/gif":
                      format = ImageFormat.Gif;
                      break;
                  default:
                      format = ImageFormat.Jpeg;
                      break;
              }

              imageToConvert.Save(ms, format);
              return ms.ToArray();
          }
      }

    More detail on the exception (the reason this causes so many issues is the lack of explanation :( ):

      System.Runtime.InteropServices.ExternalException was unhandled by user code
        Message="A generic error occurred in GDI+."
        Source="System.Drawing"
        ErrorCode=-2147467259
        StackTrace:
             at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
             at System.Drawing.Image.Save(Stream stream, ImageFormat format)
             at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139
             at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132
             at lambda_method(ExecutionScope , ControllerBase , Object[] )
             at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
             at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
             at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
             at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7()
             at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
        InnerException:

    OK, things I have tried so far:

      • Cloning the image and working on that.
      • Retrieving the encoder for that MIME type and passing that along with a JPEG quality setting.

    Please, can anyone help?
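    One failure mode consistent with these symptoms (a hedged sketch, not a confirmed diagnosis): GDI+ keeps the stream an Image was loaded from open for the Image's lifetime, and Save on a JPEG/GIF can need to re-read it. If that source stream has been disposed by the time Save runs, it fails with exactly this generic error. Drawing into a fresh Bitmap decouples the pixels from the original stream, which may also explain why plain cloning did not help:

      // sketch: copy the (possibly stream-backed) image into a Bitmap that
      // owns its own pixel buffer, then save the copy instead
      using (var safeCopy = new Bitmap(imageToConvert.Width, imageToConvert.Height))
      {
          using (var g = Graphics.FromImage(safeCopy))
          {
              g.DrawImage(imageToConvert, 0, 0, imageToConvert.Width, imageToConvert.Height);
          }
          safeCopy.Save(ms, format);
      }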


  • Understanding the workflow of the messages in a generic server implementation in Erlang

    - by Chiron
    The following code is from "Programming Erlang, 2nd Edition". It is an example of how to implement a generic server in Erlang.

      -module(server1).
      -export([start/2, rpc/2]).

      start(Name, Mod) ->
          register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).

      rpc(Name, Request) ->
          Name ! {self(), Request},
          receive
              {Name, Response} -> Response
          end.

      loop(Name, Mod, State) ->
          receive
              {From, Request} ->
                  {Response, State1} = Mod:handle(Request, State),
                  From ! {Name, Response},
                  loop(Name, Mod, State1)
          end.

      -module(name_server).
      -export([init/0, add/2, find/1, handle/2]).
      -import(server1, [rpc/2]).

      %% client routines
      add(Name, Place) -> rpc(name_server, {add, Name, Place}).
      find(Name) -> rpc(name_server, {find, Name}).

      %% callback routines
      init() -> dict:new().

      handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)};
      handle({find, Name}, Dict) -> {dict:find(Name, Dict), Dict}.

      server1:start(name_server, name_server).
      name_server:add(joe, "at home").
      name_server:find(joe).

    I tried very hard to understand the workflow of the messages. Would you please help me understand the workflow of this server implementation during the execution of the functions server1:start, name_server:add and name_server:find?
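    A walkthrough of one round trip, following only the code above (the pids are invented for illustration):

      %% Workflow of server1:start/2, name_server:add/2 and name_server:find/1.
      %%
      %% 1. server1:start(name_server, name_server) spawns a new process, say
      %%    <0.42.0>, which immediately calls Mod:init() (here dict:new()) and
      %%    enters loop(name_server, name_server, Dict0). start/2 also
      %%    registers <0.42.0> under the atom 'name_server'.
      %%
      %% 2. name_server:add(joe, "at home") runs in the caller, say <0.31.0>.
      %%    It calls rpc(name_server, {add, joe, "at home"}), which sends
      %%        {<0.31.0>, {add, joe, "at home"}}
      %%    to the registered process and blocks in receive.
      %%
      %% 3. The server's loop receives {From, Request}, so From = <0.31.0> and
      %%    Request = {add, joe, "at home"}. It calls the callback
      %%    name_server:handle({add, joe, "at home"}, Dict0), which returns
      %%    {ok, Dict1} with joe stored in Dict1.
      %%
      %% 4. The loop replies From ! {name_server, ok}, then recurses with the
      %%    new state: loop(name_server, name_server, Dict1).
      %%
      %% 5. Back in the caller, receive matches {name_server, Response}, so
      %%    rpc/2 (and therefore add/2) returns ok.
      %%
      %% name_server:find(joe) is the same round trip, except the callback
      %% returns {dict:find(joe, Dict1), Dict1}: the reply is {ok, "at home"}
      %% and the state is unchanged.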


  • How to implement a download for dynamic files in asp.net with masterpages

    - by Tim
    Hello, the title says it all. I have seen some similar questions on SO (like this or this), but either I have overlooked something or my requirement is different; neither works. My situation is the following:

      • I have a master page
      • one of its content pages is called MasterData.aspx
      • MasterData has an ASP.NET AJAX TabContainer control with one user control in every tab panel
      • these user controls (e.g. MD_Customer.ascx) hold the main content (like a normal page)
      • they all have GridViews in them, and I want to provide an Excel export button

    What I've tried is to use an iframe, like here. But the function that adds the iframe to the document never gets called, and therefore I never see the save-as dialog. Maybe this is caused by using a master page. Does somebody have an idea how to provide a button in an UpdatePanel that causes an async postback, so that I can generate a CSV dynamically in codebehind and write it to the response? Thank you in advance.

    aspx markup:

      <asp:UpdatePanel ID="UpdGridInfo" runat="server">
          <ContentTemplate>
              <asp:Label ID="LblInfo" Font-Underline="false" runat="server" CssClass="content"></asp:Label>&nbsp;&nbsp;
              <asp:ImageButton ToolTip="export to Excel" style="vertical-align:bottom" ID="BtnExcelExport" ImageUrl="~/images/excel2007logo.png" runat="server" />
          </ContentTemplate>
      </asp:UpdatePanel>

    And the BtnExcelExport codebehind handler (of course it cannot work to write the CSV to the response of this page):

      Private Sub BtnExcelExport_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles BtnExcelExport.Click
          Dim csv As String = tableToCsv(DirectCast(Me.GridSource, DataTable))
          Response.AddHeader("Content-disposition", "attachment; filename=RuleConfigurationFile.csv")
          Response.ContentType = "application/octet-stream"
          Response.Write(csv)
          Response.End()
      End Sub
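    A hedged sketch of the usual workaround: a file download cannot be delivered through an async (partial) postback, but the button inside the UpdatePanel can be registered as a full-postback trigger, after which the existing Response.Write handler works unchanged:

      <asp:UpdatePanel ID="UpdGridInfo" runat="server">
          <ContentTemplate>
              ...
              <asp:ImageButton ID="BtnExcelExport" runat="server" ... />
          </ContentTemplate>
          <Triggers>
              <%-- force a normal (non-AJAX) postback for this one control --%>
              <asp:PostBackTrigger ControlID="BtnExcelExport" />
          </Triggers>
      </asp:UpdatePanel>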


  • Looking to reimplement build toolchain from bash/grep/sed/awk/(auto)make/configure to something more…

    - by wash
    I currently maintain a few boxes that house a loosely related cornucopia of coding projects, databases and repositories (ranging from a homebrew *nix distro to my class notes), maintained by myself and a few equally pasty-skinned nerdy friends (all of said cornucopia is stored in SVN). The vast majority of our code is in C/C++/assembly (a few utilities are in Python/Perl/PHP; we're not big Java fans), compiled with gcc. Our build toolchain typically consists of a hodgepodge of make, bash, grep, sed and awk. The recent discovery of a Makefile nearly as long as the program it builds (as well as everyone's general anxiety over my cryptic sed and awking) has motivated me to seek a less painful build system.

    Currently, the strongest candidate I've come across is Boost Build/Bjam as a replacement for GNU make, and Python as a replacement for our build-related bash scripts. Are there any other C/C++/asm build systems out there worth looking into? I've browsed through a number of make alternatives, but I haven't found any that are developed by names I know aside from Boost's. (I should note that an ability to easily extract information from SVN command-line tools such as svnversion is important, as well as enough flexibility to configure builds of asm projects as easily as C/C++ projects.)
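    As a taste of how a make alternative handles the svnversion requirement, here is a sketch in CMake (offered purely as one more candidate to evaluate; target and file names are hypothetical):

      # capture `svnversion` output at configure time...
      execute_process(
          COMMAND svnversion ${CMAKE_CURRENT_SOURCE_DIR}
          OUTPUT_VARIABLE SVN_WC_VERSION
          OUTPUT_STRIP_TRAILING_WHITESPACE)

      # ...and expose it to C/C++/asm code as a preprocessor define
      add_definitions(-DSVN_WC_VERSION="${SVN_WC_VERSION}")
      add_executable(mytool main.c)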

    Read the article
