Search Results

Search found 6211 results on 249 pages for 'svn export'.

Page 224 of 249

  • How do I manage dependencies for automated builds on my build server?

    - by Tom Pickles
    I'm trying to implement continuous integration into our day-to-day workflow. In our team, we're moving from building our code in Visual Studio on our workstations and deploying it by hand, to automating the build with MSBuild.exe on our build server (which runs Jenkins) without Visual Studio installed. Our projects reference external dependencies such as Automap. Because the Automap DLL (for example) isn't on the build server, the MSBuild run fails, for obvious reasons. There are other DLLs that need to be part of the build; I'm just using Automap as an example. So what's the best way to get these dependencies onto the build server as part of the automated build? I've seen references to using a 'lib' folder, but I don't really understand where it should live (in my project, on the filesystem, in SVN ...?) and how the build server will get to it. I've also read that NuGet can manage dependencies, but my build server isn't connected to the internet, and I don't understand how my build would pull a NuGet package I may have created, or how it all fits together. Edit: I'm using Subversion, and we cannot use TeamCity as we would have to buy it and there's zero chance of funding.
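
    A common approach is to commit the third-party assemblies to a lib folder next to the solution in SVN and reference them by relative path, so Jenkins picks them up with the same checkout. A minimal sketch of what the project-file reference might look like (the folder layout and file name are assumptions, not taken from the question):

      <ItemGroup>
        <!-- Automap.dll is assumed to live in a lib folder committed to SVN
             alongside the solution, so the build server gets it with the checkout. -->
        <Reference Include="Automap">
          <HintPath>..\lib\Automap.dll</HintPath>
          <Private>True</Private>
        </Reference>
      </ItemGroup>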

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet int temp; vector<int> origins; vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array // original loop BOOST_FOREACH(string s, originTokens) { from_string(temp, s); origins.push_back(temp); } // I'd like to use this to replace the above loop std::transform(originTokens.begin(), originTokens.end(), origins.begin(), boost::bind<int>(&FromString<int>, boost::ref(temp), _1)); where the function in question is // the third parameter should be one of std::hex, std::dec or std::oct template <class T> bool FromString(T& t, const std::string& s, std::ios_base& (*f)(std::ios_base&) = std::dec) { std::istringstream iss(s); return !(iss >> f >> t).fail(); } the error I get is 1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment) 1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919) 1> 1> return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]); 1> ^ 1> 1>icl: error #10298: problem during post processing of parallel object compilation Google is being unusually unhelpful so I hope that some one here can provide some insights.
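
    For what it's worth, boost::bind does not see a function's default arguments, so all three parameters of FromString have to be bound explicitly. A self-contained sketch of that idea (not the original project code) is below; note that std::transform then collects the bool return values, so the original BOOST_FOREACH loop may still be the clearer option.

      #include <algorithm>
      #include <iterator>
      #include <sstream>
      #include <string>
      #include <vector>
      #include <boost/bind.hpp>
      #include <boost/ref.hpp>

      // Same signature as the FromString in the question.
      template <class T>
      bool FromString(T& t, const std::string& s,
                      std::ios_base& (*f)(std::ios_base&) = std::dec)
      {
          std::istringstream iss(s);
          return !(iss >> f >> t).fail();
      }

      int main()
      {
          std::vector<std::string> originTokens;
          originTokens.push_back("1");
          originTokens.push_back("23");

          int temp = 0;
          std::vector<bool> parsedOk;

          // boost::bind cannot use FromString's default argument, so std::dec
          // is bound explicitly as the third parameter.
          std::transform(originTokens.begin(), originTokens.end(),
                         std::back_inserter(parsedOk),
                         boost::bind(&FromString<int>, boost::ref(temp), _1, std::dec));

          // transform collects the bool results; the parsed value only ever
          // lands in temp, so a plain loop may remain the clearer approach here.
      }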

    Read the article

  • Decrypting a string in C# 3.5 which was encrypted with openssl in php 5.3.2

    - by panny
    Hi everyone, maybe someone can clear me up. I have been surfing on this a while now. I used openssl from console to create a root certificate for me (privatekey.pem, publickey.pem, mycert.pem, mycertprivatekey.pfx). See the end of this text on how. The problem is still to get a string encrypted on the PHP side to be decrypted on the C# side with RSACryptoServiceProvider. Any ideas? PHP side I used the publickey.pem to read it into php: $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem")); // rsa encrypt openssl_public_encrypt("123", $encrypted, $server_public_key); and the privatekey.pem to check if it works: openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem"))); Coming to the conclusion, that encryption/decryption works fine on the php side with these openssl root certificate files. C# side In same manner I read the keys into a .net C# console program: X509Certificate2 myCert2 = new X509Certificate2(); RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(); try { myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx"); rsa = (RSACryptoServiceProvider)myCert2.PrivateKey; } catch (Exception e) { } string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false)); coming to the point, that encryption/decryption works fine on the c# side with these openssl root certificate files. key generation on unix 1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem 2) openssl rsa -in privatekey.pem -pubout -out publickey.pem 3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"
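
    One detail that often matters in this exact setup: openssl_public_encrypt() uses PKCS#1 v1.5 padding by default, so the C# side must pass false for the fOAEP flag, and the ciphertext normally travels base64-encoded. A sketch under those assumptions (the PFX password and the transport format are guesses, not taken from the question):

      using System;
      using System.Security.Cryptography;
      using System.Security.Cryptography.X509Certificates;
      using System.Text;

      class Decryptor
      {
          static void Main()
          {
              // Assumes the PHP side sent base64_encode($encrypted); placeholder value here.
              string base64FromPhp = "...";
              byte[] cipherBytes = Convert.FromBase64String(base64FromPhp);

              // Private key from the same PKCS#12 file as in the question;
              // "pfx-password" stands in for the export password given to openssl pkcs12.
              var cert = new X509Certificate2(@"C:\mycertprivatekey.pfx", "pfx-password");
              var rsa = (RSACryptoServiceProvider)cert.PrivateKey;

              // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt's default.
              byte[] plainBytes = rsa.Decrypt(cipherBytes, false);
              Console.WriteLine(Encoding.UTF8.GetString(plainBytes));
          }
      }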

    Read the article

  • Git + GitHub + Heroku

    - by Haseeb Khan
    Hi All, I am new to the world of Git, GitHub and Heroku. So far, I am enjoying this paradigm but coming from a background with SVN, things seems a bit complicated to me in the world of Git. I am facing a problem for which I am looking for a solution. Scenario: I have setup a new private project on GitHub. I forked the private project and now I have the following structure in my branch: /project /apps /my-apps /my-app-1 .... /my-app-2 .... /your-apps /your-app-1 .... /your-app-2 .... /plugins .... I can commit the code in my Fork on GitHub from my machine in any of the folders I want. Later on, these would be pulled into the master repository by the admin of the project. For every individual application in the apps folder, I have setup an app on Heroku which is a Git Repo in itself where I push my changes when I am done with the user stories from my local machine. In short, every app in the apps folder is a Rails App hosted on Heroku. Problem: What I want is that when I push my changes into Heroku, they can be committed into my project fork on GitHub as well, so, it also has the latest code all the time. The issue I see is that the code on Heroku is a Git Repo while the folders which I have on GitHub are part of a Repo. So far, what I have researched is that there is something known as Submodule in the Git World which can come to the rescue, however, I have not been able to find some newbie instructions. Can someone in the community be kind enough to share thoughts and help me to identify the solution of this problem? Thanks in advance. Regards, Haseeb Khan haseeb [AT] tkxel.com TkXel
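
    If submodules do turn out to be the right fit, the basic shape looks roughly like the commands below (the repository URL and paths are placeholders, not real ones):

      # From the root of the GitHub fork: track a Heroku app repo as a submodule.
      git submodule add git@heroku.com:my-app-1.git apps/my-apps/my-app-1
      git commit -m "Track my-app-1 as a submodule"

      # On a fresh clone of the fork, pull the submodule contents down.
      git submodule update --init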

    Read the article

  • Exception error in Erlang

    - by Jim
    So I've been using Erlang for the last eight hours, and I've spent two of those banging my head against the keyboard trying to figure out the exception error my console keeps returning. I'm writing a dice program to learn Erlang. I want to be able to call it from the console through the Erlang shell. The program accepts a number of dice and is supposed to generate a list of values, each between one and six. I won't bore you with the dozens of individual micro-changes I made to try and fix the problem (random engineering), but I'll post my code and the error. The source:

      -module(dice2).
      -export([d6/1]).

      d6(1) ->
          random:uniform(6);
      d6(Numdice) ->
          Result = [],
          d6(Numdice, [Result]).

      d6(0, [Finalresult]) ->
          {ok, [Finalresult]};
      d6(Numdice, [Result]) ->
          d6(Numdice - 1, [random:uniform(6) | Result]).

    When I run the program from my console like so...

      dice2:d6(1).

    ...I get a random number between one and six as expected. However, when I run the same function with any number higher than one as an argument, I get the following exception:

      ** exception error: no function clause matching dice2:d6(1, [4|3])

    I know I don't have a function clause with matching arguments, but I don't know how to write a function that takes variable arguments, or a variable number of arguments. I tried modifying the function in question like so...

      d6(Numdice, [Result]) ->
          Newresult = [random:uniform(6) | Result],
          d6(Numdice - 1, Newresult).

    ...but I got essentially the same error. Anyone know what is going on here?
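
    For comparison, a common shape for this kind of function is to thread a plain list accumulator through the recursion instead of wrapping it in another list; a small sketch (module name made up, not presented as a drop-in fix for the code above):

      -module(dice3).
      -export([d6/1]).

      %% Roll Numdice six-sided dice and return the results as a plain list.
      d6(Numdice) ->
          d6(Numdice, []).

      d6(0, Acc) ->
          Acc;
      d6(Numdice, Acc) ->
          d6(Numdice - 1, [random:uniform(6) | Acc]).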

    Read the article

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, we are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL because the data has to come from more than one table; to satisfy the UNION ALL we pad with NULLs to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever the table projection changes (i.e. a new column is added, an existing column is modified, or a column is dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require code changes? My concern is the condition that decides which table to query; for example, if the condition is A, load from TableX; if the condition is B, load from TableA and TableY. We must know which table to get the data from. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table. I am trying to solve only the problem of dynamically generating the column list, but my manager told me to design a solution at the conceptual level rather than just fixing this case. This is a very big system with providers and consumers constantly loading and consuming data, so he wants a solution that is general. So what is the best way to store the condition, table name, and excluded columns? One way is to store them in the database. Are there other ways, and if so, which is best? I have to present at least a couple of ideas before finalizing. Thanks,
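
    Once the table is known, the column list itself can come straight from the Oracle data dictionary; a sketch along those lines (schema, table and excluded column names are invented for illustration):

      SELECT column_name
        FROM all_tab_columns
       WHERE table_name = 'TABLEX'
         AND owner      = 'MYSCHEMA'
         AND column_name NOT IN ('AUDIT_TS', 'INTERNAL_FLAG')
       ORDER BY column_id;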

    Read the article

  • Automatic Deployment of Windows Application

    - by dileepkrishnan
    Hi, we have set up continuous integration in our development environment using SVN, CC.Net, MSBuild and NUnit. Now we want to automate the process of moving (copying) builds from one stage to another, like this:

    1. Whenever a new build succeeds in Dev, it should be copied automatically to the QA server (a folder on the QA server, to be exact).
    2. Whenever a QA build passes its tests in QA, that build should be copied to the UAT server (a folder on the UAT server, to be exact). This should be implemented as a process (a CC task, for example) which we can start when QA succeeds.
    3. Whenever a UAT build passes its tests in UAT, it should be copied to the PROD server (a folder on the PROD server, to be exact). This should be implemented as a process (a CC task, for example) which we can start when UAT succeeds.

    How do I implement this? Can it be done using CC.Net alone, or with MSBuild, or do I need to employ both? Please advise what exactly needs to be done. Thanks, Dileep Krishnan
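
    Whichever tool triggers it, the copy step itself can live in an MSBuild target that CC.Net (or a person) invokes after a successful build and test run; a rough sketch (the share name and layout are placeholders, not from the question):

      <Target Name="PromoteToQA">
        <ItemGroup>
          <BuildOutput Include="$(OutputPath)**\*.*" />
        </ItemGroup>
        <!-- \\qaserver\drops is a placeholder share for the QA drop folder. -->
        <Copy SourceFiles="@(BuildOutput)"
              DestinationFolder="\\qaserver\drops\%(RecursiveDir)" />
      </Target>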

    Read the article

  • Why is my Repeater null in code behind?

    - by Rob Stevenson-Leggett
    I'm just starting a new project and I am getting some really weird stuff happening. ASP.NET 3.5, VS2008. I've tried rebuild, close VS, delete everything and get from svn again but I cannot understand why the repeater in the following is null on page_load. I know this is going to be a headslapping moment. Help me out? Markup: <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="GalleryControl.ascx.cs" Inherits="Site.UserControls.GalleryControl" %> <asp:Repeater ID="rptGalleries" runat="server"> <HeaderTemplate><ul></HeaderTemplate> <ItemTemplate> <li>wqe</li> </ItemTemplate> <FooterTemplate></ul></FooterTemplate> </asp:Repeater> Code behind public partial class GalleryControl : System.Web.UI.UserControl { protected void Page_Load(object sender, EventArgs e) { rptGalleries.DataSource = new[] {1, 2, 3, 4, 5}; rptGalleries.DataBind(); } } Why is my repeater null? What the F is going on?
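
    One classic way to end up with null child controls in a user control is constructing it with new instead of LoadControl, since only LoadControl actually parses the .ascx markup; a small sketch for comparison (the virtual path is invented):

      // Child controls stay null: the .ascx markup is never parsed this way.
      GalleryControl broken = new GalleryControl();

      // Child controls (including rptGalleries) are created: LoadControl parses the markup.
      GalleryControl working =
          (GalleryControl)Page.LoadControl("~/UserControls/GalleryControl.ascx");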

    Read the article

  • Eclipse PDT "tips" ?

    - by Pascal MARTIN
    Hi! (Yes, this is quite an open, general and subjective question -- that's by design, because I want the tips you think are great!) I'm using Eclipse PDT 2.1 to work in PHP, for both small and big projects -- I've been doing so for quite some time now, actually (since before 1.0 stable, if I remember well)... I was wondering if any of you know "tips" to be more efficient. Let me explain in more detail: I know about plugins like Aptana (better editor for JS/CSS), Subversive (for SVN access), RSE, Filesync, integrating Xdebug's debugger, ... What I mean by "tips" is more the little things you discovered one day and have used all the time since -- things that make you more efficient in your PHP projects. Some examples of "tips" that come to my mind, and that I already know and use:

    - Ctrl+Space to open the list of suggestions for function / variable names.
    - Ctrl+Shift+R (Navigate > Open Resource) to open a popup which shows only files whose names contain what you type, i.e. quick opening of files. This one might be the perfect example: I know it is not often known by coworkers, and they find it as useful as I do; so I guess there might be lots of other things like this one that I don't know myself ^^
    - Ctrl+M to switch to full-screen view for the editor (instead of double-clicking the tabs bar).
    - Shift+F2 while on a function name, to open its page in the PHP manual in a browser.
    - Note for Mac users: use Command instead of Control.

    I guess you get the point; but I'm really open to any suggestion (be it Eclipse-related in general, or more PHP/PDT-specific) that can help me be more efficient :-) Anyway, thanks in advance for your help!

    Read the article

  • Producing a Form from an Overlay from Reporting Services rdlc Reports

    - by Mike Wills
    I am not sure what you call it in other technologies, on the IBM i (or iSeries) we call it overlays. The overlay is an image of a form that is stored on the server then a program generates the form with fields from the database so you can eliminate preprinted forms. I had a problem last year with the method I was trying at the time. It was a rush job at the time to be revisited at a later point. The work-around at the time was to export to PDF. So now it is "later" and once again is a rush (imagine that). This is all done through a web-based interface. So how do you generate forms from something that was once a preprinted form? What method do you recommend? This is a legal form and must be filled out a certain way and can have many in a batch (up to 50 or so). I would prefer to not have them print one page at a time. Any ideas would be appreciated!

    Read the article

  • Cryptic Erlang Errors

    - by Jim
    Okay, so I started learning Erlang recently but am baffled by the errors it keeps returning. I made a bunch of changes but I keep getting errors. The syntax is correct as far as I can tell, but clearly I'm doing something wrong. Have a look...

      -module(pidprint).
      -export([start/0]).

      dostuff([]) ->
          receive
              begin -> io:format("~p~n", [This is a Success])
          end.

      sender([N]) ->
          N ! begin,
          io:format("~p~n", [N]).

      start() ->
          StuffPid = spawn(pidprint, dostuff, []),
          spawn(pidprint, sender, [StuffPid]).

    Basically I want to compile the script, call start, spawn the "dostuff" process, pass its process identifier to the "sender" process, which then prints it out. Finally I want to send the atom "begin" to the "dostuff" process using the process identifier initially passed into sender when I spawned it. The errors I keep getting occur when I try to use c() to compile the script. Here they are:

      ./pidprint.erl:6: syntax error before: '->'
      ./pidprint.erl:11: syntax error before: ','

    What am I doing wrong?

    Read the article

  • Did we always have to register to download the Java 5 JDK, or is this new Oracle fun?

    - by Ukko
    I could swear that just a couple of months ago I downloaded a copy of the Java 1.5 SE JDK and I did not have to give them information on my first born. Today, I had to go through the register-and-we-will-send-you-a-link-someday dance. I have not received the link yet, so I thought I would ask about it here. What is special about the Java 5 JDK? I can get 6 just by clicking, is this a stick to get us to migrate to Java 6? Am I just not remembering doing this before? What marketing genius thought this would be a value add for Java? "If we make them sweat for the JDK they won't just delete it willy-nilly the next time?" Does everyone picture the people designing systems like this as mustache twirling Snidely Whiplash clones like I do? Did I just miss the link for the Secret Squirrel route to the download page? Finally, I am in the U.S. so I should not have to worry about export restrictions. Any thoughts?

    Read the article

  • Why does a Silverlight application created from an exported template show a blank screen in the browser?

    - by Edward Tanguay
    I created a silverlight app (without website) named TestApp, with one TextBox: <UserControl x:Class="TestApp.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"> <Grid x:Name="LayoutRoot"> <TextBlock Text="this is a test"/> </Grid> </UserControl> I press F5 and see "this is a test" in my browser (firefox). I select File | Export Template | name it TestAppTemplate and save it. I create a new silverlight app based on the above template. The MainPage.xaml has the exact same XAML as above. I press F5 and see a blank screen in my browser. I look at the HTML source of both of these and they are identical. Everything I have compared in both projects is identical. What do I have to do so that a Silverlight application which is created from my exported template does not show a blank screen?

    Read the article

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interact with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version-control (SVN) but our stored procedures and schema are not. After much lobbying of expedient upper-management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals: Place schema and stored procedures under source-control. Automate the build/deployment process. I would like to proceed per the accepted answer's strategy in this post but have additional questions: I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore? Assuming I have all triggers, stored-procedures, schema, etc... under source control, and that they are scripted to individual files, how do I generate a build script which will take into account dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script) What does the workflow of performing an update at the client look like? i.e. I have to keep existing table data. How do I roll-back schema changes? I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution -- how can I enforce this? Thank you in advance for your help.
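
    On the build-script question specifically, one low-tech option that keeps dependency order explicit is a hand-maintained master script run in SQLCMD mode; a sketch with invented file names:

      -- deploy.sql, run with:  sqlcmd -S myserver -d mydb -i deploy.sql
      -- File names are made up; the point is that the listing order encodes
      -- dependencies (Order.sql references Customer.sql, procedures come after tables).
      :r .\Tables\Customer.sql
      :r .\Tables\Order.sql
      :r .\StoredProcedures\usp_GetOrders.sql
      :r .\Triggers\trg_Order_Audit.sql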

    Read the article

  • Serializing Configurations for a Dependency Injection / Inversion of Control

    - by Joshua Starner
    I've been researching Dependency Injection and Inversion of Control practices lately in an effort to improve the architecture of our application framework and I can't seem to find a good answer to this question. It's very likely that I have my terminology confused, mixed up, or that I'm just naive to the concept right now, so any links or clarification would be appreciated. Many examples of DI and IoC containers don't illustrate how the container will connect things together when you have a "library" of possible "plugins", or how to "serialize" a given configuration. (From what I've read about MEF, having multiple declarations of [Export] for the same type will not work if your object only requires 1 [Import]). Maybe that's a different pattern or I'm blinded by my current way of thinking. Here's some code for an example reference: public abstract class Engine { } public class FastEngine : Engine { } public class MediumEngine : Engine { } public class SlowEngine : Engine { } public class Car { public Car(Engine e) { engine = e; } private Engine engine; } This post talks about "Fine-grained context" where 2 instances of the same object need different implementations of the "Engine" class: http://stackoverflow.com/questions/2176833/ioc-resolve-vs-constructor-injection Is there a good framework that helps you configure or serialize a configuration to achieve something like this without hard coding it or hand-rolling the code to do this? public class Application { public void Go() { Car c1 = new Car(new FastEngine()); Car c2 = new Car(new SlowEngine()); } } Sample XML: <XML> <Cars> <Car name="c1" engine="FastEngine" /> <Car name="c2" engine="SlowEngine" /> </Cars> </XML>

    Read the article

  • Pull specific information from a long list with Perl

    - by melignus
    The file that I've got to work with here is the result of an LDAP extraction, but I ultimately need to get the information into a format that a spreadsheet can use. The data looks like this:

      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      displayName: John Doe
      name: ##userName
      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      displayName: Jane Doe
      name: ##userName
      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
      displayName: Ted Doe
      name: ##userName

    The format that I need to export to is:

      firstName lastName userName
      firstName lastName userName
      firstName lastName userName

    where the separators are tabs, so I can then import that file into a database. I have experience doing this in VBScript, but I'm trying to switch over to Perl for as much server administration as possible. I'm not sure of the syntax for what I want, which is basically:

      while not end of file {
          detect "displayName: " & $firstName & " " & $lastName
          detect "name: ##" & $userName
          write $firstName tab $lastName tab $userName to file
      }

    Also, if someone could point me to a resource specifically on the text-parsing syntax that Perl uses, I'd be very grateful. Most of the resources that I've come across haven't been very helpful.
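
    A minimal Perl sketch of that loop (file names are made up, and it assumes displayName always holds exactly two words with the matching name: line following it):

      #!/usr/bin/perl
      use strict;
      use warnings;

      open my $in,  '<', 'ldap_export.txt' or die "ldap_export.txt: $!";
      open my $out, '>', 'users.tsv'       or die "users.tsv: $!";

      my ($first, $last);
      while (my $line = <$in>) {
          if ($line =~ /^displayName:\s+(\S+)\s+(\S+)/) {
              # Remember the name until the matching "name:" line shows up.
              ($first, $last) = ($1, $2);
          }
          elsif ($line =~ /^name:\s+##(\S+)/ && defined $first) {
              print {$out} join("\t", $first, $last, $1), "\n";
              ($first, $last) = (undef, undef);
          }
      }
      close $in;
      close $out;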

    Read the article

  • Convert SWF file to FLV with FFMPEG & getting error "could not find codec parameters"

    - by Ritesh
    Hi, I am trying to convert an SWF file to FLV, but I keep getting the same error:

      C:\Users\Administrator>C:/ffmpeg/ffmpeg.exe -i C:/xampplite/htdocs/ffmpeg/1.swf \
          C:/xampplite/htdocs/ffmpeg/file1.flv
      FFmpeg version SVN-r16573, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-cflags=-fno-common --enable-memalign-hack --enable-pthreads
          --enable-libmp3lame --enable-libxvid --enable-libvorbis --enable-libtheora
          --enable-libspeex --enable-libfaac --enable-libgsm --enable-libx264
          --enable-libschroedinger --enable-avisynth --enable-swscale --enable-gpl
        libavutil   49.12. 0 / 49.12. 0
        libavcodec  52.10. 0 / 52.10. 0
        libavformat 52.23. 1 / 52.23. 1
        libavdevice 52. 1. 0 / 52. 1. 0
        libswscale   0. 6. 1 /  0. 6. 1
        built on Jan 13 2009 02:57:09, gcc: 4.2.4
      C:/xampplite/htdocs/ffmpeg/1.swf: could not find codec parameters

    Please help me solve this: what am I doing wrong?

    Read the article

  • Summarising grouped records in a dataframe in R (...again)

    - by monch1962
    Hello all, (I tried to ask this question earlier today, but later realised I had over-simplified it; the answers I received were correct, but I couldn't use them because of that over-simplification. Here's my second attempt...) I have a data frame in R that looks like:

      "Timestamp", "Source", "Target", "Length", "Content"
      0.1        , P1      , P2      , 5       , "ABCDE"
      0.2        , P1      , P2      , 3       , "HIJ"
      0.4        , P1      , P2      , 4       , "PQRS"
      0.5        , P2      , P1      , 2       , "ZY"
      0.9        , P2      , P1      , 4       , "SRQP"
      1.1        , P1      , P2      , 1       , "B"
      1.6        , P1      , P2      , 3       , "DEF"
      2.0        , P2      , P1      , 3       , "IJK"

    and I want to convert it to:

      "StartTime", "EndTime", "Duration", "Source", "Target", "Length", "Content"
      0.1        , 0.4      , 0.3       , P1      , P2      , 12      , "ABCDEHIJPQRS"
      0.5        , 0.9      , 0.4       , P2      , P1      , 6       , "ZYSRQP"
      1.1        , 1.6      , 0.5       , P1      , P2      , 4       , "BDEF"

    Trying to put this into English: I want to group consecutive records with the same 'Source' and 'Target' together, then print a single record per group showing the StartTime, EndTime and Duration (= EndTime - StartTime) for that group, along with the sum of the Lengths in that group and a concatenation of the Content (which will all be strings) in that group. The Timestamp values always increase throughout the data frame. I had a look at melt/recast and have a feeling it could be used to solve the problem, but I couldn't get my head around the documentation. I suspect it's possible to do this within R, but I really don't know where to start. In a pinch I could export the data frame and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.
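
    In base R, one way to express this is to build a run id for consecutive Source/Target pairs and then summarise each run; a sketch assuming the data frame is called df, with the columns shown above stored as character (not factor) columns:

      # New run whenever Source or Target differs from the previous row.
      grp <- cumsum(c(TRUE, df$Source[-1] != df$Source[-nrow(df)] |
                            df$Target[-1] != df$Target[-nrow(df)]))

      out <- do.call(rbind, lapply(split(df, grp), function(g) data.frame(
        StartTime = min(g$Timestamp),
        EndTime   = max(g$Timestamp),
        Duration  = max(g$Timestamp) - min(g$Timestamp),
        Source    = g$Source[1],
        Target    = g$Target[1],
        Length    = sum(g$Length),
        Content   = paste(g$Content, collapse = ""),
        stringsAsFactors = FALSE)))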

    Read the article

  • Is it possible to parameterize a MEF import?

    - by Josh Einstein
    I am relatively new to MEF so I don't fully understand the capabilities. I'm trying to achieve something similar to Unity's InjectionMember. Let's say I have a class that imports MEF parts. For the sake of simplicity, let's take the following class as an example of the exported part. [Export] [PartCreationPolicy(CreationPolicy.NonShared)] public class Logger { public string Category { get; set; } public void Write(string text) { } } public class MyViewModel { [Import] public Logger Log { get; set; } } Now what I'm trying to figure out is if it's possible to specify a value for the Category property at the import. Something like: public class MyViewModel { [MyImportAttribute(Category="MyCategory")] public Logger Log { get; set; } } public class MyOtherViewModel { [MyImportAttribute(Category="MyOtherCategory")] public Logger Log { get; set; } } For the time being, what I'm doing is implementing IPartImportsSatisfiedNotification and setting the Category in code. But obviously I would rather keep everything neatly in one place.

    Read the article

  • How to accept the confirmation prompt automatically in PowerShell for Outlook

    - by user2919845
    How do I automatically accept the confirmation prompt in PowerShell for Outlook? I have a script that exports attachments from Outlook email (see below). It works correctly on one PC, but on another PC there is a problem: Outlook shows a security prompt and wants an answer (Permit / Deny / Help). If I manually click Permit or Deny it works correctly, but I want to automate it. Can you give me a suggestion for how to do that in PowerShell? I have tried to configure Outlook not to show this message, but I didn't succeed. My script:

      # <-- Script --------->
      # The script works with the Outlook Inbox folder.
      # It checks whether an email has ".txt" attachments and saves them to $filepath.

      # Path for exported files (attachments)
      $filepath = "d:\Exported_files\"

      # Create the Outlook COM object
      $o = New-Object -comobject outlook.application
      $n = $o.GetNamespace("MAPI")

      # $f - folder „dorucena posta“; 6 = Inbox
      $f = $n.GetDefaultFolder(6)

      # Take the newest 10 emails, keep only those with attachments
      $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
          # Process only unread mail
          if ($_.unread -eq $True) {
              # Mark the mail as read so it is not processed again the next day
              $_.unread = $False
              $SenderName = $_.SenderName
              Write-Host "Email from: ", $SenderName
              # Process all attachments
              $_.attachments | foreach {
                  $a = $_.filename
                  If ($a.Contains(".txt")) {
                      Write-Host $SenderName, " ", $a
                      # Copy *.txt attachments to the folder $filepath
                      $_.saveasfile((Join-Path $filepath "$a"))
                  }
              }
          }
      }
      Write-Host "Finish"
      # <------ End Script ---------------------------------->

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adapt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish but perhaps if I describe what we need/want you all can help. Most of our files are large, 50+ Gig in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value pair style look-up. When we query for a file, we don't want to have to load all of it into memory. They're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it. We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this.) That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server. We'd like complete replication. We also have other smaller files that have a tree type relationship with the Big files. One file's content will point to the next and so on, and so on. It's not a "spoked wheel," it's a full blown tree. We'd prefer a Python, C or C++ API to work with a system like this but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time. What you think? Is there something out there like this?

    Read the article

  • Need help with version number schemes

    - by Davy8
    I know the standard Major.Minor.Build.Revision scheme, but there are several considerations for us that are somewhat unique:

    - We do internal releases almost daily, occasionally more than once a day.
    - Windows Installer doesn't check Revision, so that field is almost moot for our purposes.
    - Major and Minor numbers ideally are only updated for public releases and should be set manually.
    - That leaves the Build number as the one that needs to be updated automatically.
    - We want internal releases to be able to be performed from any developer's machine, which rules out using x.x.* in Visual Studio, because different numbers could be generated on different machines and each build isn't guaranteed to be larger than the previous one.
    - We have about 15 or so projects as part of the product, so saving the version numbers in SVN isn't ideal, since every release we'd have to commit all those files.

    Given those criteria I can't really come up with a good versioning scheme. The last two criteria could be dropped, but meeting all of them seems ideal. A date stamp is insufficient because we might do more than one release a day, and given that the relevant Windows Installer field is limited to a 16-bit value (65535; WiX additionally complains about numbers higher than Int32.MaxValue), a date/time stamp won't fit.

    Read the article

  • Google MapView doesn't work after signing the app

    - by user164589
    Hi guys, I am facing an Android application signing problem. My application contains a Google MapView. When I compile the app and run it on the emulator, the MapView works fine, but once I sign the app, the MapView doesn't work. I do have a Google Maps API key, and it works on the emulator. I managed to sign the app once, two months ago; then I upgraded the app and now I need to sign it again. I don't know why the MapView doesn't work in the signed app. How can I fix it? Please advise. I used the following steps to sign the app: run Eclipse, select the project, right-click > Android Tools > Export Signed Application Package, then fill in the forms (validity years: 200, and all passwords the same). Can you suggest anything? Thanks in advance.
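
    For reference, the Maps API key sits in the layout and is tied to the MD5 fingerprint of the certificate used to sign the APK, so an app signed with the release keystore needs a key generated for that keystore's fingerprint rather than the debug one; a sketch (the key value is obviously a placeholder):

      <!-- res/layout/map.xml : apiKey must be generated from the release
           certificate's fingerprint (keytool -list -keystore release.keystore). -->
      <com.google.android.maps.MapView
          xmlns:android="http://schemas.android.com/apk/res/android"
          android:id="@+id/mapview"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent"
          android:clickable="true"
          android:apiKey="0XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx" />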

    Read the article

  • Android Cursor strange behaviour

    - by sandis
    After many houres of bug searching in a big app, I have finally tracked down the bug. This logging captures the problem: Log.d(TAG,"buildList, DBresult.getInt(1): "+DBresult.getInt(1)); Log.d(TAG,"buildList, DBresult.getString(1): "+DBresult.getString(1)); Log.d(TAG,"buildList, DBresult.getInt(4): "+DBresult.getInt(4)); Log.d(TAG,"buildList, DBresult.getString(4): "+DBresult.getString(4)); The resulting output: 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getInt(1): 0 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getString(1): false 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getInt(4): 0 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getString(4): true There are no backgroung threads running. As you can see the problem is that '0' is interpreted as false on one occasion, and as true on another. Since I am completely lost on how this can happen, I dont know how to proceed. What could possibly result in such a behaviour? Both the columns are of the type "boolean", i.e a numeric in sqlite. Unfortunately the string returned is the correct value, while the integer is always 0. If I export the database to my computer and check it with SQlite Administrator I can see that the values are correctly set, it is only the getInt()-function that always returns 0. I know for a fact that this works in other apps I have coded, and I dont know why this has stopped working. I have tried compiling the code under 2.0, 2.0.1 and 2.1, and it always appears. I can make my app runnable again by getting boolean values like this: myBool= (DBresult.getString(0).equals("true")); but that is both ugly and not optimized. Any suggestions on what is causing this behaviour is welcome. Cheers,

    Read the article

  • python on apache - getting 404

    - by Kirby
    I edited this question after I found a solution... I need to understand why the solution worked instead of my method. (This is likely to be a silly question.) I tried searching other related questions, but to no avail. I am running Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2. I have a script called test.py:

      #! /usr/bin/python
      print "Content-Type: text/html"    # HTML is following
      print                              # blank line, end of headers
      print "hello world"

    Running it as an executable works:

      /var/www$ ./test.py
      Content-Type: text/html

      hello world

    but when I open http://localhost/test.py I get a 404 error. What am I missing? I used this resource to enable Python parsing on Apache: http://ubuntuforums.org/showthread.php?t=91101. From that same thread, the following code worked... why?

      #!/usr/bin/python
      import sys
      import time

      def index(req):
          # Following line causes error to be sent to browser
          # rather than to log file (great for debug!)
          sys.stderr = sys.stdout
          #print "Content-type: text/html\n"
          #print """
          blah1 = """<html>
          <head><title>A page from Python</title></head>
          <body>
          <h4>This page is generated by a Python script!</h4>
          The current date and time is """
          now = time.gmtime()
          displaytime = time.strftime("%A %d %B %Y, %X",now)
          #print displaytime,
          blah1 += displaytime
          #print """
          blah1 += """
          <hr>
          Well House Consultants demonstration
          </body>
          </html>
          """
          return blah1
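
    The difference usually comes down to how Apache is told to handle .py files; a sketch of the two typical configurations (the directory path is an assumption, and only one of the two would be active at a time):

      # Option 1: mod_python's publisher handler. A request for /test.py calls
      # index(req) and returns its value; the file is not executed top to bottom.
      <Directory /var/www>
          AddHandler mod_python .py
          PythonHandler mod_python.publisher
          PythonDebug On
      </Directory>

      # Option 2: classic CGI. The script runs as-is and must print its own headers.
      <Directory /var/www>
          Options +ExecCGI
          AddHandler cgi-script .py
      </Directory>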

    Read the article
