Search Results

Search found 35094 results on 1404 pages for 'post build'.


  • Ant Tokenizer: Selecting an individual Token

    - by John Oxley
    I have the following ant task: <loadfile property="proj.version" srcfile="build.py"> <filterchain> <striplinecomments> <comment value="#"/> </striplinecomments> <linecontains> <contains value="Version" /> </linecontains> </filterchain> </loadfile> <echo message="${proj.version}" /> And the output is [echo] config ["Version"] = "v1.0.10-r4.2" How do I then use a tokenizer to get only v1.0.10-r4.2, the equivalent of | cut -d'"' -f4
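
    One possible approach (a sketch, not from the original question): Ant's tokenfilter with a replaceregex can emulate the cut command by capturing the text between the third and fourth double quotes; the exact regex below is an assumption based on the sample line shown above.

        <loadfile property="proj.version" srcfile="build.py">
            <filterchain>
                <striplinecomments>
                    <comment value="#"/>
                </striplinecomments>
                <linecontains>
                    <contains value="Version"/>
                </linecontains>
                <tokenfilter>
                    <!-- keep only the 4th double-quote-delimited field, e.g. v1.0.10-r4.2 -->
                    <replaceregex pattern='^[^"]*"[^"]*"[^"]*"([^"]*)".*$' replace="\1"/>
                </tokenfilter>
            </filterchain>
        </loadfile>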

    Read the article

  • Qt compilation and stylesheet

    - by Yosko
    Each time I compile my Qt project after modifying my qss stylesheet file, the modifications aren't taken into account unless I rebuild everything. Any idea on a workaround for this, so that I don't have to wait 5 minutes each time I change my qss? Notes: I use Qt 4.8, and my stylesheet is declared in a resource file (qrc). EDIT: As suggested by Luca Carlon, when a qss is referenced in the project through a .qrc file, changes to the qss don't touch the qrc itself, so the compiler ignores it. To avoid that, I added a Custom Build Step to my project: it runs before the qmake step, calls a .bat file without any argument, and the .bat contains the real command: copy /b files.qrc +,,

    Read the article

  • Visual Studio namespace errors after deleting userControls

    - by msfanboy
    Really, Visual Studio can be so annoying sometimes... I did nothing more than delete 3 UserControls in a folder. Since then I get an error message I cannot get rid of. Whatever I do, I cannot build my project successfully. I did not touch the SchoolAdministrationUC.xaml file, but I deleted 3 other UserControls also located in the path: TBM\View\SchoolclassAdministration\ Error message from VS: Error 1 The type or namespace name "SchoolclassAdministration" does not exist in the namespace "TBM.View" (are you missing an assembly reference?) E:\TBM\obj\x86\Debug\View\SchoolclassAdministration\SchoolAdministrationUC.g.cs 33 16 TBM How do I get rid of this error?

    Read the article

  • WPF application in obj directory doesn't work.

    - by juharr
    When I build my WPF application the exe that ends up in the bin directory works just fine, but the one in the obj directory does not. When I Debug the exe from the obj directory I get the following exception: TypeInitializationException was unhandled: The type initializer for 'MyProject.App' threw an exception. So basically I'm wondering why the obj exe doesn't work while the bin one does (I was under the assumption that the obj exe was just copied to the bin) and how to fix it. The reason that I even care is because I'm using Wix to create a MSI for my application and I have a Votive project setup that uses var.MyProject.TargetPath which points to the exe in the obj directory.

    Read the article

  • How to program three editions Light, Pro, Ultimate in one solution

    - by Henry99
    I'd like to know how best to program three different editions of my C# ASP.NET 3.5 application in VS2008 Professional (which includes a web deployment project). I have a Light, Pro and Ultimate edition (or version) of my application. At the moment I've put everything in one solution with three build configurations in Configuration Manager, and I use preprocessor directives all over the code (there are around 20 such constructs in some ten thousand lines of code, so it stays manageable): #if light //light code #endif #if pro //pro code #endif //etc... I've read Stack Overflow for hours hoping to find out how, e.g., Microsoft does this with its different Windows editions, but did not find what I expected. There is heavy debate in places about whether preprocessor directives are evil. What I like about these #if directives is the side-by-side code of the differences, so I will still understand the code for the different editions after six months, plus the particular benefit that compiled code of other editions is NOT given out to the customer. OK, long explanation; to repeat the question: what's the best way to go?
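
    As an illustrative aside (not from the original question): the pattern described maps onto per-configuration conditional compilation symbols; a minimal C# sketch, assuming symbols named LIGHT, PRO and ULTIMATE are defined in the respective build configurations under Project Properties > Build.

        // Hypothetical feature gate; LIGHT/PRO/ULTIMATE are assumed symbols
        // set per build configuration, so only one branch is ever compiled in.
        public static class Edition
        {
            public static string Name()
            {
        #if ULTIMATE
                return "Ultimate";
        #elif PRO
                return "Pro";
        #else
                return "Light";   // default: the Light edition
        #endif
            }
        }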

    Read the article

  • Building SL4 + RIAServices app takes too long on VS2010.

    - by adlanelm
    Got a Win7 box with VS2010 Premium installed on it. Building desktop apps works just fine. But we have this solution with 15 SL4 and 21 desktop projects... Building the SL part of it takes too long. This is very irritating and encourages us to drop TDD, since every time I run a test it takes ~3 seconds for msbuild to find out that nothing changed and the project should be skipped. The projects are very small, there's nothing fancy in them, and we didn't have any problems before we switched from VS2008+SL3. I've heard people complaining about VS2010 speed in general, but nothing about SL4 build time. Is anyone experiencing the same problems, and is there any workaround for this?

    Read the article

  • frequent updates of a Tomcat application

    - by Erel Segal Halevi
    I have an application that runs on a Tomcat 7 server on a Windows machine. In its current stage, I have to frequently update and fix it. Whenever I need to update the application, I do all this: Build a new war file; Go to the Windows server, stop the Tomcat service; download the file, put it under webapps; Remove the old application folder under webapps; Remove the old application folder under work/Catalina/localhost (otherwise it keeps the old version cached). Restart the Tomcat service. I am sure there is a way to do all this automatically. What is it?
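
    A hedged sketch (not part of the original question) that simply scripts the manual steps above in a Windows batch file; the service name, paths and war location are assumptions and would need adjusting.

        @echo off
        rem deploy.bat - hypothetical one-shot redeploy of myapp.war
        set CATALINA_HOME=C:\tomcat7
        set WAR_SOURCE=\\buildserver\drops\myapp.war

        net stop Tomcat7
        rem remove the exploded webapp, the old war and Tomcat's cached copy
        rmdir /s /q "%CATALINA_HOME%\webapps\myapp"
        del /q "%CATALINA_HOME%\webapps\myapp.war"
        rmdir /s /q "%CATALINA_HOME%\work\Catalina\localhost\myapp"
        rem drop in the freshly built war and restart the service
        copy /y "%WAR_SOURCE%" "%CATALINA_HOME%\webapps\"
        net start Tomcat7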

    Read the article

  • Caught AttributeError while rendering: 'str' object has no attribute '_meta'

    - by D_D
    def broadcast_display_and_form(request):
        if request.method == 'POST':
            form = PostForm(request.POST)
            if form.is_valid():
                post = form.cleaned_data['post']
                obj = form.save(commit=False)
                obj.person = request.user
                obj.post = post
                obj.save()
                readers = User.objects.all()
                for x in readers:
                    read_obj = BroadcastReader(person=x)
                    read_obj.post = obj
                    read_obj.save()
                return HttpResponseRedirect('/broadcast')
        else:
            form = PostForm()
        posts = BroadcastReader.objects.filter(person=request.user)
        return render_to_response('broadcast/index.html', {'form': form, 'posts': posts})

    My template:

    {% extends "base.html" %}
    {% load comments %}
    {% block content %}
    <form action='.' method='POST'>
    {{ form.as_p }}
    <p> <input type="submit" value ="send it" /></input> </p>
    </form>
    {% get_comment_count for posts.post as comment_count %}
    {% render_comment_list for posts.post %}
    {% for x in posts %}
    <p> {{ x.post.person }} - {{ x.post.post }} </p>
    {% endfor %}
    {% endblock %}
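
    A hedged observation (not part of the original post): posts is a queryset, so posts.post does not resolve to a model instance and the comments framework ends up calling _meta on a string; one plausible fix is to move the comment tags inside the loop and point them at each object, along these lines.

        {% for x in posts %}
          <p> {{ x.post.person }} - {{ x.post.post }} </p>
          {% get_comment_count for x.post as comment_count %}
          {% render_comment_list for x.post %}
        {% endfor %}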

    Read the article

  • JUnit Ant Task, output stack trace

    - by Benju
    I have a number of tests failing in the following JUnit task: <target name="test-main" depends="build.modules" description="Main Integration/Unit tests"> <junit fork="yes" description="Main Integration/Unit Tests" showoutput="true" printsummary="true" outputtoformatters="true"> <classpath refid="test-main.runtime.classpath"/> <batchtest filtertrace="false" todir="${basedir}"> <fileset dir="${basedir}" includes="**/*Test.class" excludes="**/*MapSimulationTest.class"/> </batchtest> </junit> </target> How do I tell JUnit to output the errors for each test so that I can look at the stack trace and debug the issues?
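
    A hedged sketch (not from the original question): the <junit> task only reports per-test detail through a formatter, so attaching a plain formatter that writes to the console (usefile="false") is one way to see each failure's stack trace; printsummary="withOutAndErr" also echoes the tests' output.

        <junit fork="yes" showoutput="true" printsummary="withOutAndErr">
            <classpath refid="test-main.runtime.classpath"/>
            <!-- print each test's result, including failure stack traces, to the console -->
            <formatter type="plain" usefile="false"/>
            <batchtest filtertrace="false" todir="${basedir}">
                <fileset dir="${basedir}" includes="**/*Test.class"
                         excludes="**/*MapSimulationTest.class"/>
            </batchtest>
        </junit>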

    Read the article

  • Different versions in manifest on different machines

    - by Terry777
    Hi guys, I have two machines, both with VS2005 SP1 installed and with the WinSXS showing the same things installed. When one machine builds a particular C++ .dll .vcproj it ends up with <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> in its manifest file. But on the other machine it ends up with <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> even though this machine does not have '8.0.50608.0' libraries listed in its WinSXS. The .dll built on this machine with the older version referenced has some problems. I have ensured both machines have the same latest source code and references, etc. What could be causing it to build with the different reference? Thanks! Terry

    Read the article

  • can't save form content to database, help plsss!!

    - by dana
    I'm trying to save 100 characters from the user in a minimal 'microblog' application. My code seems not to have any mistakes, but it doesn't work. The problem is in views.py: I can't save the foreign key to the user table. models.py looks like this:

    class NewManager(models.Manager):
        def create_post(self, post, username):
            new = self.model(post=post, created_by=username)
            new.save()
            return new

    class New(models.Model):
        post = models.CharField(max_length=120)
        date = models.DateTimeField(auto_now_add=True)
        created_by = models.ForeignKey(User, blank=True)
        objects = NewManager()

    class NewForm(ModelForm):
        class Meta:
            model = New
            fields = ['post']
            # widgets = {'post': Textarea(attrs={'cols': 80, 'rows': 20})

    And the view:

    def save_new(request):
        if request.method == 'POST':
            created_by = User.objects.get(created_by = user)
            date = request.POST.get('date', '')
            post = request.POST.get('post', '')
            new_obj = New(post=post, date=date, created_by=created_by)
            new_obj.save()
            return HttpResponseRedirect('/')
        else:
            form = NewForm()
        return render_to_response('news/new_form.html', {'form': form}, context_instance=RequestContext(request))

    I didn't mention imports here - they're done right, anyway. My mistake is in views.py: when I try to save it says "local variable 'created_by' referenced before assignment", and if I put created_by as a parameter, the save needs more parameters... It is really weird, help please!!
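
    A hedged note (not part of the original post): user is never defined inside save_new (and User has no created_by field), which is at least one reason the view cannot work as written; one plausible fix is to take the author straight from the request and let the ModelForm do the saving.

        def save_new(request):
            if request.method == 'POST':
                form = NewForm(request.POST)
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.created_by = request.user  # the logged-in user, no lookup needed
                    new_obj.save()
                    return HttpResponseRedirect('/')
            else:
                form = NewForm()
            return render_to_response('news/new_form.html', {'form': form},
                                      context_instance=RequestContext(request))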

    Read the article

  • Data-only static libraries with GCC

    - by regularfry
    How can I make static libraries with only binary data, that is without any object code, and make that data available to a C program? Here's the build process and simplified code I'm trying to make work: ./datafile: abcdefghij Makefile: libdatafile.a: ar [magic] datafile main: libdatafile.a gcc main.c libdatafile.a -o main main.c: #define TEXTPTR [more magic] int main(){ char mystring[11]; memset(mystring, '\0', 11); memcpy(TEXTPTR, mystring, 10); puts(mystring); puts(mystring); return 0; } The output I'm expecting from running main is, of course: abcdefghijabcdefghij My question is: what should [magic] and [more magic] be?
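
    One common answer (a hedged sketch, not from the original question): use the linker's binary input format to wrap the file in an object, archive that object, and reference the symbols ld generates from the file name; the symbol _binary_datafile_start below follows GNU ld's default mangling of the path "datafile" and is an assumption.

        # Makefile sketch: wrap ./datafile in an object file, then archive it
        # (recipe lines must be indented with a tab)
        libdatafile.a: datafile
        	ld -r -b binary -o datafile.o datafile
        	ar rcs libdatafile.a datafile.o

        main: libdatafile.a
        	gcc main.c libdatafile.a -o main

    On the C side, [more magic] would then be something like an extern declaration, e.g. extern const char _binary_datafile_start[]; with #define TEXTPTR _binary_datafile_start. Note also that memcpy takes its destination first, so the call in main.c would need its arguments swapped to copy from TEXTPTR into mystring.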

    Read the article

  • I have an Xcode static library project, how do I add a test target to it so I can run it there? (Ins

    - by zekel
    I want to be able to test library code in the library target so I don't have to switch over to a separate project to run it. I see how to add a target, but I'm not sure how to set it up to run like the "Command Line Tool" project template does. I tried adding a new "Shell Tool" target, but I don't know how to make it run like one. What build settings do I have to add to that target? What files (main.m?) do I need to start it up?

    Read the article

  • Common files in output directories in a C# program

    - by Net Citizen
    My VS2008 solution has the following setup: Program1, Program2, and Common.dll (used and referenced by both Program1 and Program2). In debug mode I like to set my output directory to Program Files\Productname, because some code will get the exe path for various reasons. My problem is that Program1, when compiled, will give an error that it could not copy Common.dll if Program2 is running. And vice versa. The annoyance here is that I don't even make changes to Common.dll that often, but 100% of the time it will try to copy it, not only when there are changes. I end up having to close all the programs, then build, then start them again. So my question is: how can I have VS2008 copy Common.dll only when there are changes inside the Common.dll project?

    Read the article

  • using different string files in android

    - by boreas
    I'm porting my iPhone app to Android and I'm having a problem with the string files now. The app is a translation tool and users can switch the languages, so all the localized strings are present in both languages and are independent of the locale the OS is running. For the iOS version I have different files like de.strings, en.strings and fr.strings and so on. For every target with a specified language pair I read the strings from the string tables, e.g. for de-fr I include de.strings and fr.strings in the project, set the names of the string tables in the info-list file, and read strings from them. In the end I have one project containing different targets (with different info-list files) and all are well configured. I'm intending to do the same on the Android platform, but: is only one strings.xml allowed per project? How do I set up different build targets? And how do I specify, per target, which strings.xml it should read?

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?), deploying the database changes with a managed, automated approach, and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        Declare @Output VarChar(max)
        Set @Output = ''

        SELECT @Output = @Output + Email + Char(13) + Char(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'

        If @Output > ''
        Begin
            Set @Output = Char(13) + Char(10) + @Output
            EXEC tSQLt.Fail @Output
        End
    END;

Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • The Flame virus developed by the United States and Israel, according to the Washington Post, to steal data on the Iranian nuclear program

    The Flame virus developed by the United States and Israel, according to the Washington Post, to steal data on the Iranian nuclear program. Update of 20/06/2012, by Hinault Romaric. Flame, the computer virus of extraordinary complexity that attracted so much attention at the start of this month, is reportedly the work of the United States in collaboration with Israel, according to the Washington Post, which cites as its source Western officials close to the matter. Considered the biggest cyber-espionage weapon ever designed, Flame was developed with the goal of stealing data on the Iranian nuclear program, in order to...

    Read the article

  • Is it okay to call exception-triggered debugging "post-mortem debugging"?

    - by cool-RR
    I heard the term "post-mortem debugging", and Wikipedia says it's debugging done after the program has crashed. I often debug Python apps using a debugger that stops execution once an important-enough exception has been raised. Then I can use the debug probe to investigate. Does this count as "post-mortem debugging"? Because the program doesn't really crash. EDIT: If the answer is no, then what name would you use for the kind of debugging that I described?

    Read the article

  • libgtk2.0-common fails to build with Gdk-2.0.gir error, Type reference 'GdkPixbuf' not found

    - by Stefano Palazzo
    I'm trying to build gtk, but it fails. Here's what I'm doing: sudo apt-get build-dep libgtk2.0-common sudo apt-get source libgtk2.0-common cd gtk+2.0-2.22.0/ sudo gedit gtk/gtktreeview.c & #...editing a few files (or not, it's the same error) sudo ./configure --prefix=/usr sudo make The compilation runs for a while and then quits: Gdk-2.0.gir: error: Type reference 'GdkPixbuf' not found ... make: *** [all] Error 2 What am I doing wrong?

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that, if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response. I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule, and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account. You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.) Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore, or the page that was processing them wasn't doing so correctly. I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page:

        GET /AuthorizeNetProcessingPage.aspx HTTP/1.1
        Connection: Close
        Accept: */*
        Host: www.example.com

    That was it. 
Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www). About a month ago - the same time these recurring payment transaction details were no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic for example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said yes, the silent post would follow redirects. Their reports didn't jibe with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again! If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!
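
    As an aside (not part of the original article), the request-logging step described above can be just a couple of lines in the silent post page's code-behind; a C# sketch, where the page name and the App_Data log path are assumptions.

        protected void Page_Load(object sender, EventArgs e)
        {
            // Dump every incoming request (headers + body) to a timestamped file
            // before the normal parsing of the recurring-payment fields runs.
            string logFile = Server.MapPath("~/App_Data/silentpost-" +
                DateTime.UtcNow.ToString("yyyyMMdd-HHmmss-fff") + ".txt");
            Request.SaveAs(logFile, true);

            // ... existing silent post processing continues here ...
        }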

    Read the article

  • Installing a directory with a Debian Package

    - by Meisie
    Hi guys, I want to create a Debian package that installs a bunch of folders to a system, but I can't get it working. The package gets created without any errors and lintian also says it's okay, but installing does nothing. The rules file looks like this:

    #!/usr/bin/make -f

    logs = $(CURDIR)/shell_logs/
    DEST1 = /opt/Pacetutor/

    build: build-stamp
    build-stamp:
    	dh_testdir
    	touch build-stamp

    clean:
    	dh_testdir
    	dh_testroot
    	rm -f build-stamp
    	dh_clean

    install: build clean $(logs)
    	dh_testdir
    	dh_testroot
    	dh_prep
    	dh_installdirs
    	mkdir -m 755 -p $(DEST1)    # this is probably optional or not needed
    	cp -r $(logs) $(DEST1)      # using mv works, but that's not what I want

    binary-indep: build install
    	dh_testdir
    	dh_testroot
    	dh_installchangelogs
    	dh_installdocs
    	dh_installexamples
    	dh_installman
    	dh_link
    	dh_compress
    	dh_fixperms
    	dh_installdeb
    	dh_gencontrol
    	dh_md5sums
    	dh_builddeb

    binary-arch: build install

    binary: binary-indep binary-arch
    .PHONY: build clean binary-indep binary-arch binary install
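
    A hedged suggestion (not part of the original question): with debhelper, directory payloads are usually declared in a debian/<package>.install file and copied by dh_install, rather than with a hand-written cp in the rules file that lands on the build machine instead of inside the .deb; a minimal sketch, where the package name "pacetutor" is an assumption.

        # debian/pacetutor.install -- each line is "source  destination-dir"
        shell_logs/*  opt/Pacetutor/

    The rules file would then call dh_install (and dh_installdirs with a debian/pacetutor.dirs listing opt/Pacetutor) before dh_builddeb, so the folders end up under debian/pacetutor/ and inside the package rather than only in the build tree.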

    Read the article

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds - mostly for learning what it is and how it works, but also for using in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log. I keep getting the error described in the stack trace below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this? [15:16:06]: Checking for changes [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:08]: Updating sources: server side checkout... [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o. [15:16:11]: Application failed with internal error: [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25 [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66 [15:16:11]: Process exited with code -11 [15:16:11]: Build finished

    Read the article

  • How to keep asm output from Linux kernel module build

    - by fastmonkeywheels
    I'm working on a Linux kernel module for a 2.6.x kernel and I need to view the assembly output, though it's currently generated as a temporary file and deleted afterwards. I'd like to have the assembly output mixed with my C source file so I can easily trace where my problem lies. This is for an ARMv6 core, and apparently objdump doesn't support this architecture. I've included my makefile below.

    ETREP=/xxSourceTreexx/
    GNU_BIN=$(ETREP)/arm-none-linux-gnueabi/bin
    CROSS_COMPILE := $(GNU_BIN)/arm-none-linux-gnueabi-
    ARCH := arm
    KDIR=$(ETREP)/linux-2.6.31/
    MAKE= CROSS_COMPILE=$(CROSS_COMPILE) ARCH=$(ARCH) make

    obj-m += xxfile1xx.o

    all:
    	$(MAKE) -C $(KDIR) M=$(PWD) modules

    clean:
    	$(MAKE) -C $(KDIR) M=$(PWD) clean
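
    A hedged sketch of two options (not from the original post), assuming the kbuild tree and cross toolchain shown above: kbuild (at least in 2.6-era trees) exposes per-file .s targets, and the cross objdump (unlike the host one) does understand ARM, so it can interleave C source with disassembly when the object is built with debug info.

        # Option 1: ask kbuild for the generated assembly of one source file
        # (recipe lines must be indented with a tab)
        asm:
        	$(MAKE) -C $(KDIR) M=$(PWD) EXTRA_CFLAGS="-fverbose-asm" xxfile1xx.s

        # Option 2: disassemble the built object with source interleaved
        # (compile with EXTRA_CFLAGS="-g" so -S can find the source lines)
        lst: all
        	$(GNU_BIN)/arm-none-linux-gnueabi-objdump -S -d xxfile1xx.o > xxfile1xx.lst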

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to get it to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater. I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in-turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points: The same user identity is running the process in both cases The same working directory is used in both cases The same dbdeploy code is running in both cases and with the same supplied parameters The same database connection string is being used in both cases The same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) Here's the top of the stack trace during the error: at System.Data.OracleClient.OCI.DetermineClientVersion() at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction (String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName) at System.Data.OracleClient.OracleInternalConnection..ctor( OracleConnectionString connectionOptions) at System.Data.OracleClient.OracleConnectionFactory.CreateConnection( DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection( DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection( DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection( DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OracleClient.OracleConnection.Open() at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager. GetCurrentVersionFromDb() My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?

    Read the article

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process: 1. Developer: checks in code. 2. Build: polls the repo, sees the change, and kicks off a build that: 3. Build: updates from the repo, builds w/ MSBuild, runs unit tests w/ NUnit, 4. Build: creates the installer package. Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally rdp in, d/l the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues: We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?) When I generate packages with MSDeploy, I see that I don't have just 1 file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'? It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here? It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in seeing what everyone's enterprise experience is with the tool. Any suggestions?
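
    A hedged illustration (not from the original question) of the packaging call referred to above; MSBuild 4.0 ships with the .NET Framework 4 itself, and the Package target is invoked against the web project file, so a CI invocation might look roughly like this (the project name, framework folder and output path are assumptions).

        %WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe MyWebApp.csproj ^
            /t:Package /p:Configuration=Release ^
            /p:PackageLocation=C:\drops\MyWebApp\MyWebApp.zip

    The deploy.cmd, SetParameters.xml and manifest files generated next to the zip are the scripted front end (deploy.cmd invokes msdeploy.exe with the zip and the parameter file), which fits the pull-then-run deployment constraint described in the question.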

    Read the article
