Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • How can I import one Gradle script into another?

    - by Ant
    Hi all, I have a complex gradle script that wraps up a load of functionality around building and deploying a number of netbeans projects to a number of environments. The script works very well, but in essence it is all configured through half a dozen maps holding project and environment information. I want to abstract the tasks away into another file, so that I can simply define my maps in a simple build file, and import the tasks from the other file. In this way, I can use the same core tasks for a number of projects and configure those projects with a simple set of maps. Can anyone tell me how I can import one gradle file into another, in a similar manner to Ant's task? I've trawled Gradle's docs to no avail so far. Additional Info After Tom's response below, I thought I'd try and clarify exactly what I mean. Basically I have a gradle script which runs a number of subprojects. However, the subprojects are all Netbeans projects, and come with their own ant build scripts, so I have tasks in gradle to call each of these. My problem is that I have some configuration at the top of the file, such as: projects = [ [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"], [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"] ] I then generate tasks such as: projects.each({ task "checkout_$it.shortname" << { // Code to for example check module out from cvs using config from 'it'. } }) I have many of these sort of task generation snippets, and all of them are generic - they entirely depend on the config in the projects list. So what I want is a way to put this in a separate script and import it in the following sort of way: projects = [ [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"], [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"] ] import("tasks.gradle") // This will import and run the script so that all tasks are generated for the projects given above. So in this example, tasks.gradle will have all the generic task generation code in, and will get run for the projects defined in the main build.gradle file. In this way, tasks.gradle is a file that can be used by all large projects that consist of a number of sub-projects with Netbeans ant build files.
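
    For reference, Gradle can pull one build script into another with `apply from:`; below is a minimal sketch of the setup described above. The file names and the `projects` variable follow the question itself, and the exact way of declaring shared properties may need adjusting for the Gradle version in use.

        // build.gradle -- only the project-specific configuration lives here
        projects = [
            [name: "MySubproject1", shortname: "sub1", env: "mainEnv", cvs_module: "mod1"],
            [name: "MySubproject2", shortname: "sub2", env: "altEnv", cvs_module: "mod2"]
        ]

        // pulls in the shared, generic task-generation logic
        apply from: 'tasks.gradle'

        // tasks.gradle -- generates one checkout task per configured project
        projects.each { p ->
            task "checkout_${p.shortname}" << {
                // e.g. check the module out of CVS using the settings in 'p'
            }
        }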


  • PNGException "crc corruption" when attempting to create ImageIcon objects from ZIP archive

    - by Nathan Strong
    I've got a ZIP file containing a number of PNG images that I am trying to load into my Java application as ImageIcon resources directly from the archive. Here's my code: import java.io.*; import java.util.Enumeration; import java.util.zip.*; import javax.swing.ImageIcon; public class Test { public static void main( String[] args ) { if( args.length == 0 ) { System.out.println("usage: java Test.java file.zip"); return; } File archive = new File( args[0] ); if( !archive.exists() || !archive.canRead() ) { System.err.printf("Unable to find/access %s.\n", archive); return; } try { ZipFile zip = new ZipFile(archive); Enumeration <? extends ZipEntry>e = zip.entries(); while( e.hasMoreElements() ) { ZipEntry entry = (ZipEntry) e.nextElement(); int size = (int) entry.getSize(); int count = (size % 1024 == 0) ? size / 1024 : (size / 1024)+1; int offset = 0; int nread, toRead; byte[] buffer = new byte[size]; for( int i = 0; i < count; i++ ) { offset = 1024*i; toRead = (size-offset > 1024) ? 1024 : size-offset; nread = zip.getInputStream(entry).read(buffer, offset, toRead); } ImageIcon icon = new ImageIcon(buffer); // boom -- why? } zip.close(); } catch( IOException ex ) { System.err.println(ex.getMessage()); } } } The sizes reported by entry.getSize() match the uncompressed size of the PNG files, and I am able to read the data out of the archive without any exceptions, but the creation of the ImageIcon blows up. The stacktrace: sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) Can anyone shed some light on it? Google hasn't turned up any useful information.
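
    One thing that stands out in the posted code (an observation, not a confirmed diagnosis): `zip.getInputStream(entry)` is called inside the read loop, so every iteration opens a fresh stream positioned at the start of the entry and copies the entry's first bytes over later parts of the buffer, which would corrupt the PNG data and its CRCs. A minimal sketch of filling the buffer from a single stream instead, meant as a drop-in for the body of the `while` loop in the question:

        // Open one stream per entry and loop until the buffer is full,
        // since read() may return fewer bytes than requested.
        InputStream in = zip.getInputStream(entry);
        byte[] buffer = new byte[(int) entry.getSize()];
        int offset = 0;
        while (offset < buffer.length) {
            int nread = in.read(buffer, offset, buffer.length - offset);
            if (nread < 0) {
                break; // unexpected end of stream
            }
            offset += nread;
        }
        in.close();
        ImageIcon icon = new ImageIcon(buffer);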


  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we run unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.

    Requirements. There are a couple of things we want to test, in the following priority: Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? Code quality: do we have code flaws like duplicates, overly complex methods or other violations of a defined set of rules? Also, the tool must run headless (command line, Ant, ...) and we want to be able to analyse a partial code base (changed sources only).

    Tools. We did a little research and found the following tools that could potentially help:
    - Cast Application Intelligence Platform (AIP): seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    - Toad for Oracle: the Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL plugin: uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: quite general analysis of a source code base. Command line available?
    - Semantic Designs Clones Detector: detects clones. But also via command line?
    - Fortify Source Code Analyzer: seems to be focused on security issues. But maybe it is extensible?
    - more...

    So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences?

    Related questions on SO:
    - http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    - http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    - http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql


  • How can I return a value from a function?

    - by Shadi Al Mahallawy
    I used a function to calculate information about certain instructions I intialized in a map,like this void get_objectcode(char*&token1,const int &y) { map<string,int> operations; operations["ADD"] = 18; operations["AND"] = 40; operations["COMP"] = 28; operations["DIV"] = 24; operations["J"] = 0X3c; operations["JEQ"] =30; operations["JGT"] =34; operations["JLT"] =38; operations["JSUB"] =48; operations["LDA"] =00; operations["LDCH"] =50; operations["LDL"] =55; operations["LDX"] =04; operations["MUL"] =20; operations["OR"] =44; operations["RD"] =0xd8; operations["RSUB"] =0x4c; operations["STA"] =0x0c; operations["STCH"] =54; operations["STL"] =14; operations["STSW"] =0xe8; operations["STX"] =10; operations["SUB"] =0x1c; operations["TD"] =0xe0; operations["TIX"] =0x2c; operations["WD"] =0xdc; if ((operations.find("ADD")->first==token1)||(operations.find("AND")->first==token1)||(operations.find("COMP")->first==token1) ||(operations.find("DIV")->first==token1)||(operations.find("J")->first==token1)||(operations.find("JEQ")->first==token1) ||(operations.find("JGT")->first==token1)||(operations.find("JLT")->first==token1)||(operations.find("JSUB")->first==token1) ||(operations.find("LDA")->first==token1)||(operations.find("LDCH")->first==token1)||(operations.find("LDL")->first==token1) ||(operations.find("LDX")->first==token1)||(operations.find("MUL")->first==token1)||(operations.find("OR")->first==token1) ||(operations.find("RD")->first==token1)||(operations.find("RSUB")->first==token1)||(operations.find("STA")->first==token1)||(operations.find("STCH")->first==token1)||(operations.find("STCH")->first==token1)||(operations.find("STL")->first==token1) ||(operations.find("STSW")->first==token1)||(operations.find("STX")->first==token1)||(operations.find("SUB")->first==token1) ||(operations.find("TD")->first==token1)||(operations.find("TIX")->first==token1)||(operations.find("WD")->first==token1)) { int y=operations.find(token1)->second; //cout<<hex<<y<<endl; } return ; } which if I cout y in the function gives me an answer just fine which is what i need but there is a problem tring to return the value from the function so that I could use it outside the function , it gives a whole different answer, what is the problem
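
    For what it's worth, a minimal sketch of one way to get the value out: drop the local `int y` (it shadows the parameter and disappears at the end of the block) and return the looked-up opcode instead. `find` already reports whether the token exists, so the long chain of comparisons is not needed. The -1 sentinel and the parameter types below are illustrative assumptions, not taken from the original code.

        #include <map>
        #include <string>

        // Returns the opcode for a mnemonic, or -1 if the token is not a known instruction.
        int get_objectcode(const std::string& token1, const std::map<std::string, int>& operations)
        {
            std::map<std::string, int>::const_iterator it = operations.find(token1);
            if (it == operations.end())
                return -1;         // not found
            return it->second;     // the opcode looked up in the map
        }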


  • How to add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for Audit Logs) which needs to append audit log entries to the association of the object: public Company : IAuditable { // Other stuff removed for bravety IAuditLog IAuditable.CreateEntry() { var entry = new CompanyAudit(); this.auditLogs.Add(entry); return entry; } public virtual IEnumerable<CompanyAudit> AuditLogs { get { return this.auditLogs } } } The AuditLogs collection is mapped with cascading: public class CompanyMap : ClassMap<Company> { public CompanyMap() { // Id and others removed fro bravety HasMany(x => x.AuditLogs).AsSet() .LazyLoad() .Access.ReadOnlyPropertyThroughCamelCaseField() .Cascade.All(); } } And the listener just asks the auditable object to create log entries so it can update them: internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener { public bool OnPreUpdate(PreUpdateEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } public bool OnPreInsert(PreInsertEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } private static void LogProperty(IAuditable auditable) { var entry = auditable.CreateAuditEntry(); entry.CreatedAt = DateTime.Now; entry.Who = GetCurrentUser(); // Might potentially execute a query. // Also other information is set for entry here } } The problem with it though is that it throws TransientObjectException when commiting the transaction: NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: PropConnect.Model.UserAuditLog, Entity: PropConnect.Model.UserAuditLog at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session) at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session) at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session) at NHibernate.Action.CollectionRecreateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() As the cascading is set to All I expected NH to handle this. I also tried to modify the collection using state but pretty much the same happens. So the question is what is the last chance to modify object's associations before it gets saved? Thanks, Dmitriy.


  • Can someone point out to me what I did wrong? Trying to map VB to Java using JNA to access the library

    - by henry
    Original Working VB_Code Private Declare Function ConnectReader Lib "rfidhid.dll" () As Integer Private Declare Function DisconnectReader Lib "rfidhid.dll" () As Integer Private Declare Function SetAntenna Lib "rfidhid.dll" (ByVal mode As Integer) As Integer Private Declare Function Inventory Lib "rfidhid.dll" (ByRef tagdata As Byte, ByVal mode As Integer, ByRef taglen As Integer) As Integer Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim desc As String desc = "1. Click ""Connect"" to talk to reader." & vbCr & vbCr desc &= "2. Click ""RF On"" to wake up the TAG." & vbCr & vbCr desc &= "3. Click ""Read Tag"" to get tag PCEPC." lblDesc.Text = desc End Sub Private Sub cmdConnect_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdConnect.Click If cmdConnect.Text = "Connect" Then If ConnectReader() Then cmdConnect.Text = "Disconnect" Else MsgBox("Unable to connect to RFID Reader. Please check reader connection.") End If Else If DisconnectReader() Then cmdConnect.Text = "Connect" End If End If End Sub Private Sub cmdRF_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdRF.Click If cmdRF.Text = "RF On" Then If SetAntenna(&HFF) Then cmdRF.Text = "RF Off" End If Else If SetAntenna(&H0) Then cmdRF.Text = "RF On" End If End If End Sub Private Sub cmdReadTag_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdReadTag.Click Dim tagdata(64) As Byte Dim taglen As Integer, cnt As Integer Dim pcepc As String pcepc = "" If Inventory(tagdata(0), 1, taglen) Then For cnt = 0 To taglen - 1 pcepc &= tagdata(cnt).ToString("X2") Next txtPCEPC.Text = pcepc Else txtPCEPC.Text = "ReadError" End If End Sub Java Code (Simplified) import com.sun.jna.Library; import com.sun.jna.Native; public class HelloWorld { public interface MyLibrary extends Library { public int ConnectReader(); public int SetAntenna (int mode); public int Inventory (byte tagdata, int mode, int taglen); } public static void main(String[] args) { MyLibrary lib = (MyLibrary) Native.loadLibrary("rfidhid", MyLibrary.class); System.out.println(lib.ConnectReader()); System.out.println(lib.SetAntenna(255)); byte[] tagdata = new byte[64]; int taglen = 0; int cnt; String pcepc; pcepc = ""; if (lib.Inventory(tagdata[0], 1, taglen) == 1) { for (cnt = 0; cnt < taglen; cnt++) pcepc += String.valueOf(tagdata[cnt]); } } } The error happens when lib.Inventory is run. lib.Inventory is used to get the tag from the RFID reader. If there is no tag, no error. The error code An unexpected error has been detected by Java Runtime Environment: EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0b1d41ab, pid=5744, tid=4584 Java VM: Java HotSpot(TM) Client VM (11.2-b01 mixed mode windows-x86) Problematic frame: C [rfidhid.dll+0x141ab] An error report file with more information is saved as: C:\eclipse\workspace\FelmiReader\hs_err_pid5744.log
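
    A couple of things in the mapping look suspicious (hedged guesses, since only the DLL's VB declarations are known): `ByRef tagdata As Byte` is the start of a buffer the native side fills, which JNA usually maps to a `byte[]`, and `ByRef taglen As Integer` is an output parameter, which maps to `IntByReference`; passing `tagdata[0]` and `taglen` by value gives the native code nowhere valid to write, which can produce exactly this kind of access violation. A VB `Declare` also implies the stdcall convention, so `StdCallLibrary` may be needed instead of plain `Library`. A sketch of the adjusted interface:

        import com.sun.jna.ptr.IntByReference;
        import com.sun.jna.win32.StdCallLibrary;

        public interface MyLibrary extends StdCallLibrary {
            int ConnectReader();
            int SetAntenna(int mode);
            // ByRef tagdata As Byte    -> byte[] buffer filled by the reader
            // ByRef taglen  As Integer -> IntByReference written by the reader
            int Inventory(byte[] tagdata, int mode, IntByReference taglen);
        }

    On the calling side the whole array (not `tagdata[0]`) would be passed, along with a `new IntByReference()`; after the call, `taglen.getValue()` gives the number of bytes the reader wrote, and each byte can be formatted with `String.format("%02X", tagdata[i])`.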


  • Set maven to use archiva repositories WITHOUT using activeByDefault?

    - by Sam Levin
    I am very close to finally having a working setup with archiva and maven. The last thing that's really boggling me, is how to set up my internal and snapshot repositories - without using a profile which contains activeByDefault set to true. I am using a SUPER super pom - a company-wide pom which contains distributionManagement information for releases. I was thinking that I could specify the repositories in this pom, and configure the authentication settings in settings.xml? Can I use repositories tag without a profile? There should be no "profile" for my internal and snapshot repositories, as they will never change... What I'm trying to steer clear from, is using a "default" profile, which is active all the time. I hear activeByDefault is NOT a best practice and I don't intend to use it. With that said, how should I go about doing this? My internal repo is a mirror of the maven central repo, so I would like to lock down my developers to ONLY use our internal artifact server. Remember - I do NOT want a profile with activeByDefault set to true. I cannot stress this enough! Should I use Maven mirrors? Should I "add" additional repositories? If I take the repositories tag instead of the mirrors tag, will maven force builds to use ONLY my archiva settings, instead of the default maven central? Or is what I seek to accomplish able to be done using only the mirrors tag in maven? I know how to configure repo credentials when using repositories tag, but not with mirrors. How is this done? Is providing credentials for anything in mirrors tags the same as for anything in repositories tags? Am I missing something obvious? I've had it up to here with getting things up and running using maven. I know it will be worthwhile in the end, but it is surely causing me a ton of aggravation and resources seem to be sparse. Either that, or people are content using it however they please without regard to best-practices. Thank you
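
    For reference, a common way to lock builds to an internal repository manager without touching profiles at all is a mirror-of-everything entry in settings.xml; credentials for a mirror are supplied through a <server> element whose <id> matches the mirror's <id>. The URL and ids below are placeholders for the Archiva instance, so this is a sketch rather than a drop-in configuration:

        <!-- settings.xml -->
        <settings>
          <mirrors>
            <!-- route every repository request through the internal Archiva proxy -->
            <mirror>
              <id>internal-archiva</id>
              <mirrorOf>*</mirrorOf>
              <url>http://archiva.example.com/repository/internal/</url>
            </mirror>
          </mirrors>
          <servers>
            <!-- authentication for the mirror above; matched by id -->
            <server>
              <id>internal-archiva</id>
              <username>builder</username>
              <password>secret</password>
            </server>
          </servers>
        </settings>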


  • Turn class "Interfaceable"

    - by scooterman
    Hi folks, On my company system, we use a class to represent beans. It is just a holder of information using boost::variant and some serialization/deserialization stuff. It works well, but we have a problem: it is not over an interface, and since we use modularization through dlls, building an interface for it is getting very complicated, since it is used in almost every part of our app, and sadly interfaces (abstract classes ) on c++ have to be accessed through pointers, witch makes almost impossible to refactor the entire system. Our structure is: dll A: interface definition through abstract class dll B: interface implementation there is a painless way to achieve that (maybe using templates, I don't know) or I should forget about making this work and simply link everything with dll B? thanks Edit: Here is my example. this is on dll A BeanProtocol is a holder of N dataprotocol itens, wich are acessed by a index. class DataProtocol; class UTILS_EXPORT BeanProtocol { public: virtual DataProtocol& get(const unsigned int ) const { throw std::runtime_error("Not implemented"); } virtual void getFields(std::list<unsigned int>&) const { throw std::runtime_error("Not implemented"); } virtual DataProtocol& operator[](const unsigned int ) { throw std::runtime_error("Not implemented"); } virtual DataProtocol& operator[](const unsigned int ) const { throw std::runtime_error("Not implemented"); } virtual void fromString(const std::string&) { throw std::runtime_error("Not implemented"); } virtual std::string toString() const { throw std::runtime_error("Not implemented"); } virtual void fromBinary(const std::string&) { throw std::runtime_error("Not implemented"); } virtual std::string toBinary() const { throw std::runtime_error("Not implemented"); } virtual BeanProtocol& operator=(const BeanProtocol&) { throw std::runtime_error("Not implemented"); } virtual bool operator==(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); } virtual bool operator!=(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); } virtual bool operator==(const char*) const { throw std::runtime_error("Not implemented"); } virtual bool hasKey(unsigned int field) const { throw std::runtime_error("Not implemented"); } }; the other class (named GenericBean) implements it. This is the only way I've found to make this work, but now I want to turn it in a truly interface and remove the UTILS_EXPORT (which is an _declspec macro), and finally remove the forced linkage of B with A.


  • How to set property only on second column of a ListView?

    - by Lernkurve
    Introduction I have a ListView and want to format only the second column. The following XAML code does that: <ListView x:Name="listview"> <ListView.View> <GridView> <GridViewColumn Header="Property" DisplayMemberBinding="{Binding Path=Key}" Width="100"/> <!-- <GridViewColumn Header="Value" DisplayMemberBinding="{Binding Path=Value}" Width="250">--> <GridViewColumn Header="Value" Width="250"> <GridViewColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Path=Value}" Foreground="CornflowerBlue" AutomationProperties.Name={Binding Path="Key"}/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> </GridView> </ListView.View> </ListView> The one problem I have is that the AutomationProperties.Name property is not being set. I was checking it with the Coded UI Test Builder and the property is empty. The Text and the Foreground property are being set correctly. Question Does anyone know why AutomationProperties.Name is not being set? Additional information Strangly enough, the following XAML code does set the AutomationProperties.Name <ListView x:Name="listview"> <ListView.Resources> <Style TargetType="TextBlock"> <Setter Property="AutomationProperties.Name" Value="{Binding Key}"/> </Style> </ListView.Resources> <ListView.View> <GridView> <GridViewColumn Header="Property" DisplayMemberBinding="{Binding Path=Key}" Width="100"/> <GridViewColumn Header="Value" DisplayMemberBinding="{Binding Path=Value}" Width="250"/> </GridView> </ListView.View> </ListView> The problem here though is that AutomationProperties.Name is being set on all the columns. But I only want it on the second one because otherwise my Coded UI Test code returns the wrong value (that of the first column, instead of that of the second column which I want).
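
    One detail worth checking (a guess based on the snippet as posted): the attached-property binding needs the same quoting as any other attribute, i.e. `AutomationProperties.Name="{Binding Path=Key}"`. Scoping it to the second column only would then look roughly like the sketch below, with everything else unchanged; whether the Coded UI Test Builder then reports the value is not something this sketch can guarantee.

        <GridViewColumn Header="Value" Width="250">
          <GridViewColumn.CellTemplate>
            <DataTemplate>
              <TextBlock Text="{Binding Path=Value}"
                         Foreground="CornflowerBlue"
                         AutomationProperties.Name="{Binding Path=Key}" />
            </DataTemplate>
          </GridViewColumn.CellTemplate>
        </GridViewColumn>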


  • C# Spell checker Problem

    - by reggie
    I've incorporated spell check into my win forms C# project. This is my code. public void CheckSpelling() { try { // declare local variables to track error count // and information int SpellingErrors = 0; string ErrorCountMessage = string.Empty; // create an instance of a word application Microsoft.Office.Interop.Word.Application WordApp = new Microsoft.Office.Interop.Word.Application(); // hide the MS Word document during the spellcheck //WordApp.WindowState = WdWindowState.wdWindowStateMinimize; // check for zero length content in text area if (this.Text.Length > 0) { WordApp.Visible = false; // create an instance of a word document _Document WordDoc = WordApp.Documents.Add(ref emptyItem, ref emptyItem, ref emptyItem, ref oFalse); // load the content written into the word doc WordDoc.Words.First.InsertBefore(this.Text); // collect errors form new temporary document set to contain // the content of this control Microsoft.Office.Interop.Word.ProofreadingErrors docErrors = WordDoc.SpellingErrors; SpellingErrors = docErrors.Count; // execute spell check; assumes no custom dictionaries WordDoc.CheckSpelling(ref oNothing, ref oIgnoreUpperCase, ref oAlwaysSuggest, ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing); // format a string to contain a report of the errors detected ErrorCountMessage = "Spell check complete; errors detected: " + SpellingErrors; // return corrected text to control's text area object first = 0; object last = WordDoc.Characters.Count - 1; this.Text = WordDoc.Range(ref first, ref last).Text; } else { // if nothing was typed into the control, abort and inform user ErrorCountMessage = "Unable to spell check an empty text box."; } WordApp.Quit(ref oFalse, ref emptyItem, ref emptyItem); System.Runtime.InteropServices.Marshal.ReleaseComObject(WordApp); // return report on errors corrected // - could either display from the control or change this to // - return a string which the caller could use as desired. // MessageBox.Show(ErrorCountMessage, "Finished Spelling Check"); } catch (Exception e) { MessageBox.Show(e.ToString()); } } The spell checker works well, the only problem is when I try to move the spell checker the main form blurs up for some reason. Also when I close the spell checker the main form is back to normal. It seems like it is opening up Microsoft word then hiding the window, only allowing the spell checker to be seen. Please help.


  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () (gdb) backtrace #0 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () #1 0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263 #2 0x0000000100016078 in main () at main.cpp:43 The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that contain two doubles. The other container is of type std::multiset< Event, std::less< Event EventPQ called ce. It is full of Event structs. void Host::recover( int s, double recoverTime, EventPQ & ce ) { // Clearing all serotypes in carriage // and their associated recovery events in ce // and then updating susceptibility to each serotype double oldRecTime; int z; for ( InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++ ) { z = itr->first; oldRecTime = (itr->second).recT; EventPQ::iterator epqItr = ce.find( Event(oldRecTime) ); assert( epqItr != ce.end() ); ce.erase( epqItr ); immune[ z ]++; } carriage.clear(); calcSusc(); // a function that edits an array cout << "Done with sync_recovery event." << endl; } The last cout << line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempting on ce after this function, but I am unsure why it would be failing. (It's unfortunately very hard for me to test this code by removing particular lines, since they would create logical inconsistencies and further problems, but if experienced programmers still think this is the way to go, I'll try.)


  • WebDriver: tests crash with Internet Explorer 7 with the error "Modal dialog present"

    - by user1207450
    Following tests is automated by using java and selenium-server-standalone-2.20.0.jar. The test crashes with the error: Page title is: cheese! - Google Search Starting browserTest 2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - I/O exception (org.apache.http.NoHttpResponseException) caught when processing request: The target server failed to respond 2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - Retrying request Exception in thread "main" org.openqa.selenium.UnhandledAlertException: Modal dialog present (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 1.20 seconds Build info: version: '2.20.0', revision: '16008', time: '2012-02-27 19:03:04' System info: os.name: 'Windows XP', os.arch: 'x86', os.version: '5.1', java.version: '1.6.0_24' Driver info: driver.version: InternetExplorerDriver at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:170) at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:129) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:438) at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:139) at org.openqa.selenium.ie.InternetExplorerDriver.setup(InternetExplorerDriver.java:91) at org.openqa.selenium.ie.InternetExplorerDriver.<init>(InternetExplorerDriver.java:48) at com.pwc.test.java.InternetExplorer7.browserTest(InternetExplorer7.java:34) at com.pwc.test.java.InternetExplorer7.main(InternetExplorer7.java:27) Test Class: package com.pwc.test.java; import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebDriverBackedSelenium; import org.openqa.selenium.WebElement; import org.openqa.selenium.htmlunit.HtmlUnitDriver; import org.openqa.selenium.ie.InternetExplorerDriver; import com.thoughtworks.selenium.Selenium; public class InternetExplorer7 { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub WebDriver webDriver = new HtmlUnitDriver(); webDriver.get("http://www.google.com"); WebElement webElement = webDriver.findElement(By.name("q")); webElement.sendKeys("cheese!"); webElement.submit(); System.out.println("Page title is: "+webDriver.getTitle()); browserTest(); } public static void browserTest() { System.out.println("Starting browserTest"); String baseURL = "http://www.mail.yahoo.com"; WebDriver driver = new InternetExplorerDriver(); driver.get(baseURL); Selenium selenium = new WebDriverBackedSelenium(driver, baseURL); selenium.windowMaximize(); WebElement username = driver.findElement(By.id("username")); WebElement password = driver.findElement(By.id("passwd")); WebElement signInButton = driver.findElement(By.id(".save")); username.sendKeys("myusername"); password.sendKeys("magic"); signInButton.click(); driver.close(); } } I don't see any modal dialog when I launched the IE7/8 browser manually. What could be causing this?


  • Rtti accessing fields and properties in complex data structures

    - by Coco
    As already discussed in Rtti data manipulation and consistency in Delphi 2010 a consistency between the original data and rtti values can be reached by accessing members by using a pair of TRttiField and an instance pointer. This would be very easy in case of a simple class with only basic member types (like e.g. integers or strings). But what if we have structured field types? Here is an example: TIntArray = array [0..1] of Integer; TPointArray = array [0..1] of Point; TExampleClass = class private FPoint : TPoint; FAnotherClass : TAnotherClass; FIntArray : TIntArray; FPointArray : TPointArray; public property Point : TPoint read FPoint write FPoint; //.... and so on end; For an easy access of Members I want to buil a tree of member-nodes, which provides an interface for getting and setting values, getting attributes, serializing/deserializing values and so on. TMemberNode = class private FMember : TRttiMember; FParent : TMemberNode; FInstance : Pointer; public property Value : TValue read GetValue write SetValue; //uses FInstance end; So the most important thing is getting/setting the values, which is done - as stated before - by using the GetValue and SetValue functions of TRttiField. So what is the Instance for FPoint members? Let's say Parent is the Node for TExample class, where the instance is known and the member is a field, then Instance would be: FInstance := Pointer (Integer (Parent.Instance) + TRttiField (FMember).Offset); But what if I want to know the Instance for a record property? There is no offset in this case. So is there a better solution to get a pointer to the data? For the FAnotherClass member, the Instance would be: FInstance := Parent.Value.AsObject; So far the solution works, and data manipulation can be done by using rtti or the original types, without losing information. But things get harder, when working with arrays. Especially the second array of Points. How can I get the instance for the members of points in this case?


  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example: Dependency Injection (Spring, etc) MVC (Struts, ASP.NET MVC) ORMs (Linq To SQL, Hibernate) Agile Software Development These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not? EDIT It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective of what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems many SOers opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!


  • jQuery AJAX error: cannot find URL outside of debug mode

    - by John Orlandella Jr.
    I inherited some code two weeks ago that is using the jquery.ajax method to connect to a .NET web service. Here is the piece of code give me the trouble... if (MSCTour.AppSettings.OFFLINE !== 'TRUE') { $.ajax({ url: url, data: json, type: "POST", contentType: "application/json", timeout: 10000, dataType: "json", // not "json" we'll parse success: function(res){ if (!callback) { return; } /* // *** Use json library so we can fix up MS AJAX dates */ var result = ""; if (res !== "") { try { result = $.evalJSON(res); } catch (e) { result = {}; bare = true; } } /* // *** Bare message IS result */ if (bare) { callback(result); return; } /* // *** Wrapped message contains top level object node // *** strip it off */ for (var property in result) { callback(result[property]); break; } }, error: function(xhr,status,error){ if (status === 'parsererror') {} else {return error;} }, complete: function(res, status){ if (callback) { if ((status != 'success' && status != 'error') || status === 'parsererror' || (status === 'timeout' && res !== '')) { try { result = $.secureEvalJSON(res); } catch (e) { result = {}; bare = true; } callback(res); } } return; } }); } The url variable at this point equals /testsite/service.svc/GetItems Now here is where my problem lies... When running this site out of debug mode through visual studio I am not having any problem connecting to the database through the web service and seeing all my data, for both viewing and updating. When I go through the normal web server for the same site, on the same page, no data is showing up. When I put a break on the error portion of the code above in firebug this is information I am getting in the image linked below. link text I am getting what appears to be a 404 error, but when I look on the server all of the files are in the right place... coupled with the fact that it works when in debug mode, I think I am slowly going crazy staring at these same lines of code trying to find the needle in the haystack. Any help or just a direction to look in would be greatly appreciated.


  • NHibernate/FluentNHibernate throws StackOverflowException

    - by Gianluca Colucci
    Hi there! In my project I am using NHibernate/FluentNHibernate, and I am working with two entities, contracts and services. This is my contract type: [Serializable] public partial class TTLCContract { public virtual long? Id { get; set; } // other properties here public virtual Iesi.Collections.Generic.ISet<TTLCService> Services { get; set; } // implementation of Equals // and GetHashCode here } and this is my service type: [Serializable] public partial class TTLCService { public virtual long? Id { get; set; } // other properties here public virtual Activity.Models.TTLCContract Contract { get; set; } // implementation of Equals // and GetHashCode here } Ok, so as you can see, I want my contract object to have many services, and each Service needs to have a reference to the parent Contract. I am using FluentNhibernate. So my mappings file are the following: public TTLCContractMapping() { Table("tab_tlc_contracts"); Id(x => x.Id, "tlc_contract_id"); HasMany(x => x.Services) .Inverse() .Cascade.All() .KeyColumn("tlc_contract_id") .AsSet(); } and public TTLCServiceMapping() { Table("tab_tlc_services"); Id(x => x.Id, "tlc_service_id"); References(x => x.Contract) .Not.Nullable() .Column("tlc_contract_id"); } and here comes my problem: if I retrieve the list of all contracts in the db, it works. if I retrieve the list of all services in a given contract, I get a StackOverflowException.... Do you see anything wrong with what I wrote? Have I made any mistake? Please let me know if you need any additional information. Oh yes, I missed to say... looking at the stacktrace I see the system is loading all the services and then it is loading again the contracts related to those services. I don't really have the necessary experience nor ideas anymore to understand what's going on.. so any help would be really really great! Thanks in advance, Cheers, Gianluca.
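
    Since the Equals/GetHashCode bodies are elided above, this is only a guess: a StackOverflowException with a bidirectional mapping very often comes from those two methods calling each other across the association (the contract compares its services, each service compares its contract, and so on). Basing them on the identifier or a business key breaks the cycle; a sketch for the service side, under that assumption:

        public override bool Equals(object obj)
        {
            var other = obj as TTLCService;
            if (other == null)
                return false;
            // Compare by identifier only; do NOT touch Contract or other associations here.
            return Id.HasValue && Id == other.Id;
        }

        public override int GetHashCode()
        {
            // Fall back to the default hash for transient (unsaved) instances.
            return Id.HasValue ? Id.GetHashCode() : base.GetHashCode();
        }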


  • Correct way of using/testing event service in Eclipse E4 RCP

    - by Thorsten Beck
    Allow me to pose two coupled questions that might boil down to one about good application design ;-) What is the best practice for using event based communication in an e4 RCP application? How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker ? Let’s be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker to a class that dispatches events: @Inject static IEventBroker broker; private void sendEvent() { broker.post(MyEventConstants.SOME_EVENT, payload) } On the receiver side, I have a method like: @Inject @Optional private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload) Now the questions: In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context); This approach works but I find myself passing around a lot of context to wherever I use the event service. Is this really the correct approach to event based communication across an E4 application? how can I easily write JUnit tests for a class that uses the event service (either as a sender or receiver)? Obviously, none of the above annotations work in isolation since there is no context available. I understand everyone’s convinced that dependency injection simplifies testability. But does this also apply to injecting services like the IEventBroker? This article describes creation of your own IEclipseContext to include the process of DI in tests. Not sure if this could resolve my 2nd issue but I also hesitate running all my tests as JUnit Plug-in tests as it appears impractible to fire up the PDE for each unit test. Maybe I just misunderstand the approach. This article speaks about “simply mocking IEventBroker”. Yes, that would be great! Unfortunately, I couldn’t find any information on how this can be achieved. All this makes me wonder whether I am still on a "good path" or if this is already a case of bad design? And if so, how would you go about redesigning? Move all event related actions to dedicated event sender/receiver classes or a dedicated plugin?
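
    On the unit-testing side, one approach that stays within the e4 APIs (sketched here with Mockito as an assumed mocking library) is to build a throwaway IEclipseContext, put a mock IEventBroker into it, and let ContextInjectionFactory create the object under test from that context; no running workbench or PDE launch is needed for this. `MyEventSender` is a hypothetical class standing in for whatever @Injects the broker.

        import static org.mockito.Mockito.*;

        import org.eclipse.e4.core.contexts.ContextInjectionFactory;
        import org.eclipse.e4.core.contexts.EclipseContextFactory;
        import org.eclipse.e4.core.contexts.IEclipseContext;
        import org.eclipse.e4.core.services.events.IEventBroker;
        import org.junit.Test;

        public class EventSenderTest {

            @Test
            public void postsEventWhenSending() {
                IEventBroker broker = mock(IEventBroker.class);

                // Minimal context containing just what the class under test needs.
                IEclipseContext context = EclipseContextFactory.create();
                context.set(IEventBroker.class, broker);

                // MyEventSender is the hypothetical class that @Injects IEventBroker.
                MyEventSender sender = ContextInjectionFactory.make(MyEventSender.class, context);
                sender.send();

                verify(broker).post(eq(MyEventConstants.SOME_EVENT), any());
            }
        }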


  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records. Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers. I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions. I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record along with customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change and if accepted the affected industry standard reference file would be automatically updated with the appropriate fields and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated. However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
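
    One common shape for this kind of cross-table request log, sketched as DDL: the request row records which reference table and key it targets, and the proposed column values travel as a serialized payload that the admin screen knows how to unpack for the table in question. All names, types and the choice to serialize the proposed values are illustrative assumptions, not something taken from the original post.

        CREATE TABLE data_change_request (
            request_id          INT           NOT NULL PRIMARY KEY,
            requested_by        INT           NOT NULL,           -- requesting customer user
            requested_at        DATETIME      NOT NULL,
            request_type        VARCHAR(10)   NOT NULL,           -- 'insert' | 'modify' | 'delete'
            target_table        VARCHAR(128)  NOT NULL,           -- which reference table is affected
            target_key          VARCHAR(128)  NULL,               -- key of the affected row (modify/delete)
            proposed_values     TEXT          NULL,               -- requested column values, serialized (e.g. XML/JSON)
            request_comment     TEXT          NULL,
            priority            INT           NULL,
            status              VARCHAR(10)   NOT NULL,           -- 'pending' | 'declined' | 'completed'
            resolved_by         INT           NULL,               -- administrator
            resolved_at         DATETIME      NULL,
            resolution_comment  TEXT          NULL
        );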


  • While creating an archetype, I get the following error

    - by munna
    D:\Training\workspace\vppsourcemvn archetype:generate -B -DarchetypeGroupId=org .appfuse.archetypes -DarchetypeArtifactId=appfuse-modular-struts-archetype -Darc hetypeVersion=2.1.0-M1 -DgroupId=com.vmware -DartifactId=vpp [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Batch mode [WARNING] Error reading archetype catalog http://repo1.maven.org/maven2 org.apache.maven.wagon.TransferFailedException: Error transferring file: Connect ion timed out: connect at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputD ata(LightweightHttpWagon.java:143) at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:11 6) at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88) at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61) at org.apache.maven.archetype.source.RemoteCatalogArchetypeDataSource.ge tArchetypeCatalog(RemoteCatalogArchetypeDataSource.java:97) at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(D efaultArchetypeManager.java:195) at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(D efaultArchetypeManager.java:184) at org.apache.maven.archetype.ui.DefaultArchetypeSelector.getArchetypesB yCatalog(DefaultArchetypeSelector.java:278) at org.apache.maven.archetype.ui.DefaultArchetypeSelector.selectArchetyp e(DefaultArchetypeSelector.java:69) at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execu te(CreateProjectFromArchetypeMojo.java:186) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPlugi nManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandalone Goal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(Defau ltLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHan dleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmen ts(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLi fecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:6 0) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. 
java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: java.net.ConnectException: Connection timed out: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at java.net.Socket.connect(Socket.java:478) at sun.net.NetworkClient.doConnect(NetworkClient.java:163) at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at sun.net.www.http.HttpClient.(HttpClient.java:233) at sun.net.www.http.HttpClient.New(HttpClient.java:306) at sun.net.www.http.HttpClient.New(HttpClient.java:323) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLC onnection.java:860) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConne ction.java:801) at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection .java:726) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLCon nection.java:1049) at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373 ) at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputD ata(LightweightHttpWagon.java:115) ... 28 more [WARNING] No archetype found in Remote catalog. Defaulting to internal Catalog [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 46 seconds [INFO] Finished at: Wed Jun 09 16:11:07 IST 2010 [INFO] Final Memory: 11M/28M [INFO] ------------------------------------------------------------------------
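
    The underlying exception is a plain TCP connection timeout to repo1.maven.org, so this usually points at network access from the build machine rather than at the archetype itself. If the machine only reaches the internet through an HTTP proxy, declaring it in settings.xml is the standard fix; host, port and id below are placeholders.

        <!-- settings.xml -->
        <settings>
          <proxies>
            <proxy>
              <id>corporate-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <!-- <username> and <password> only if the proxy requires them -->
            </proxy>
          </proxies>
        </settings>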


  • Maven building for GoogleAppEngine, forced to include JDO libraries?

    - by James.Elsey
    Hi, I'm trying to build my application for GoogleAppEngine using maven. I've added the following to my pom which should "enhance" my classes after building, as suggested on the DataNucleus documentation <plugin> <groupId>org.datanucleus</groupId> <artifactId>maven-datanucleus-plugin</artifactId> <version>1.1.4</version> <configuration> <log4jConfiguration>${basedir}/log4j.properties</log4jConfiguration> <verbose>true</verbose> </configuration> <executions> <execution> <phase>process-classes</phase> <goals> <goal>enhance</goal> </goals> </execution> </executions> </plugin> According to the documentation on GoogleAppEngine, you have the choice to use JDO or JPA, I've chosen to use JPA since I have used it in the past. When I try to build my project (before I upload to GAE) using mvn clean package I get the following output [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. Missing: ---------- 1) javax.jdo:jdo2-api:jar:2.3-ec Try downloading the file manually from the project website. Then, install it using the command: mvn install:install-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file Alternatively, if you host your own repository you can deploy the file there: mvn deploy:deploy-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] Path to dependency: 1) org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4 2) javax.jdo:jdo2-api:jar:2.3-ec ---------- 1 required artifact is missing. for artifact: org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4 from the specified remote repositories: __jpp_repo__ (file:///usr/share/maven2/repository), DN_M2_Repo (http://www.datanucleus.org/downloads/maven2/), central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Sat Apr 03 16:02:39 BST 2010 [INFO] Final Memory: 31M/258M [INFO] ------------------------------------------------------------------------ Any ideas why I should get such an error? I've searched through my entire source code and I'm not referencing JDO anywhere, so unless the app engine libraries require it, I'm not sure why I get this message.
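
    The missing jar is a dependency of the maven-datanucleus-plugin itself, not of the application code, which is why removing JDO references from the project does not help. One workaround, taken straight from the error message above, is to download jdo2-api-2.3-ec.jar manually and install it into the local repository; the file path below is a placeholder.

        mvn install:install-file -DgroupId=javax.jdo -DartifactId=jdo2-api \
            -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/jdo2-api-2.3-ec.jar

    Alternatively the jar can be deployed to an internal repository with deploy:deploy-file, exactly as the same message suggests.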


  • Mercurial for Beginners: The Definitive Practical Guide

    - by Laz
    Inspired by Git for beginners: The definitive practical guide. This is a compilation of information on using Mercurial for beginners for practical use.

    Beginner - a programmer who has touched source control without understanding it very well.
    Practical - covering situations that the majority of users often encounter: creating a repository, branching, merging, pulling/pushing from/to a remote repository, etc.

    Notes:
    - Explain how to get something done rather than how something is implemented.
    - Deal with one question per answer.
    - Answer clearly and as concisely as possible.
    - Edit/extend an existing answer rather than create a new answer on the same topic.
    - Please provide a link to the Mercurial wiki or the HG Book for people who want to learn more.

    Questions:

    Installation/Setup
    - How to install Mercurial?
    - How to set up Mercurial?
    - How do you create a new project/repository?
    - How do you configure it to ignore files?

    Working with the code
    - How do you get the latest code?
    - How do you check out code?
    - How do you commit changes?
    - How do you see what's uncommitted, or the status of your current codebase?
    - How do you destroy unwanted commits?
    - How do you compare two revisions of a file, or your current file and a previous revision?
    - How do you see the history of revisions to a file?
    - How do you handle binary files (Visio docs, for instance, or compiler environments)?
    - How do you merge files changed at the "same time"?

    Tagging, branching, releases, baselines
    - How do you 'mark', 'tag' or 'release' a particular set of revisions for a particular set of files so you can always pull that one later?
    - How do you pull a particular 'release'?
    - How do you branch?
    - How do you merge branches?
    - How do you merge parts of one branch into another branch?

    Other
    - Good GUI/IDE plugin for Mercurial? Advantages/disadvantages?
    - Any other common tasks a beginner should know?
    - How do I interface with Subversion?

    Other Mercurial references:
    - Mercurial: The Definitive Guide
    - Mercurial Wiki
    - Meet Mercurial | Peepcode Screencast
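
    For the practical questions above, the everyday commands are all standard Mercurial (see the HG Book for the details behind each one); a compact sketch:

        hg init                       # create a new repository in the current directory
        hg status                     # what is uncommitted / the state of the working copy
        hg add newfile.txt            # start tracking a file
        hg commit -m "message"        # commit changes
        hg log somefile               # history of revisions to a file
        hg diff -r 10 -r 12 somefile  # compare two revisions of a file
        hg pull && hg update          # get the latest code from the default remote
        hg push                       # publish local commits to the remote repository
        hg tag v1.0                   # mark a release so it can be pulled later
        hg branch feature-x           # start a named branch
        hg merge feature-x            # merge that branch into the current one
        hg revert --all               # throw away uncommitted changes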


  • WCF (REST) multiple host headers with one endpoint

    - by Maan
    I have an issue with a WCF REST service (.NET 4) which has multiple host headers, but one end point. The host headers are for example: xxx.yyy.net xxx.yyy.com Both host headers are configured in IIS over HTTPS and redirect to the same WCF service endpoint. I have an Error Handling behavior which logs some extra information in case of an error. The problem is that the logging behavior only works for one of both URLs. When I first call the .net URL, the logging is only working for requests on the .net URL. When I first call the .com URL (after a Worker Process recycle), it’s only working on requests on the .com URL. The configuration looks like this: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> <services> <service name="XXX.RemoteHostService"> <endpoint address="" behaviorConfiguration="RemoteHostEndPointBehavior" binding="webHttpBinding" bindingConfiguration="HTTPSTransport" contract="XXX.IRemoteHostService" /> </service> </services> <extensions> <behaviorExtensions> <add name="errorHandling" type="XXX.ErrorHandling.ErrorHandlerBehavior, XXX.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </behaviorExtensions> </extensions> <bindings> <webHttpBinding> <binding name="HTTPSTransport"> <security mode="Transport"> <transport clientCredentialType="None"/> </security> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="RemoteHostEndPointBehavior"> <webHttp /> <errorHandling /> </behavior> </endpointBehaviors> </behaviors> …. Should I configure multiple endpoints? Or in which way could I configure the WCF Service so the logging behavior is working for both URLs? I tried several things, also solutions mentioned earlier on StackOverflow. But no luck until now...

    Read the article

  • Translate Java class with static attributes and Annotation to Scala equivalent

    - by ifischer
    I'm currently trying to "translate" the following Java class into an equivalent Scala class. It's part of a Java EE 6 application and I need it to use the JPA 2 metamodel.

    import javax.persistence.metamodel.SingularAttribute;
    import javax.persistence.metamodel.StaticMetamodel;

    @StaticMetamodel(Person.class)
    public class Person_ {
        public static volatile SingularAttribute<Person, String> name;
    }

    Disassembling the compiled class file reveals the following:

    > javap Person_.class
    public class model.Person_ extends java.lang.Object{
        public static volatile javax.persistence.metamodel.SingularAttribute name;
        public model.Person_();
    }

    So now I need an equivalent Scala file that has the same structure, since JPA depends on it: it resolves the attributes by reflection to make them accessible at runtime. The main problem, I think, is that the attribute is static, but the annotation has to be on a (Java) object (I guess). My first naive attempt to create a Scala equivalent is the following:

    @StaticMetamodel(classOf[Person])
    class Person_
    object Person_ {
      @volatile var name: SingularAttribute[Person, String] = _;
    }

    But the resulting class file is far from the Java one, so it doesn't work. When trying to access the attributes at runtime, e.g. Person_.firstname, it resolves to null; I think JPA can't do the right reflection magic on the compiled class file (the Java variant resolves to an instance of org.hibernate.ejb.metamodel.SingularAttributeImpl at runtime).

    > javap Person_.class
    public class model.Person_ extends java.lang.Object implements scala.ScalaObject{
        public static final void name_$eq(javax.persistence.metamodel.SingularAttribute);
        public static final javax.persistence.metamodel.SingularAttribute name();
        public model.Person_();
    }

    > javap Person_$.class
    public final class model.Person__$ extends java.lang.Object implements scala.ScalaObject
        public static final model.Person__$ MODULE$;
        public static {};
        public javax.persistence.metamodel.SingularAttribute name();
        public void name_$eq(javax.persistence.metamodel.SingularAttribute);
    }

    So now what I'd like to know is whether it's possible at all to create a Scala equivalent of the Java class. It seems to me that it isn't, but maybe there is a workaround or something (other than just using Java, although I want my app to be in Scala where possible). Any ideas, anyone? Thanks in advance!

    Read the article

  • Server/configuration problem: a PHP script just dies with no error log & no reason

    - by Roberto
    Hi (first of all, thanks for your attention and sorry for my bad English; also, I don't think this is a programming error - I think it is an error in some configuration of the server or something else, but I don't know what). I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts around 10,000 rows into a MySQL database; the script gets its data from an XML file. It first ran on a shared server (HostGator is our hosting provider) and in the beginning it worked fine, with no trouble, but five months later an error appeared: the process just dies for no reason. The script had only sent and inserted 700 rows, the process didn't show any warning or error, nothing appears in the error logs, and I didn't make any change to the script. HostGator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies after it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error:
    - The shared server runs Red Hat 4.1.2-46 and the dedicated server runs CentOS 5.4.
    - I have commented out the line that sends the SMS, and the problem remains. On the shared server the script was fine at first, then started to die after inserting roughly 700 rows, and now it dies after inserting about 2,500 rows - better, but we didn't change anything. On the dedicated server the script dies after inserting about 40 rows.
    - Before it dies, the script turns into a zombie process, and we don't know why.
    - Memory usage appears to be 0.3%, and CPU usage 0.7% to 1%.
    - I have raised PHP's memory limit to 128 MB, and even set it to -1 (so PHP has no limit), but the problem remains.
    - We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem.
    - I'm using mysqli to connect from PHP to MySQL.
    - HostGator reports that they haven't made any change or update to the servers.

    What could the problem be? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much - I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks!!! Bye!!! :)
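    As a first debugging step - a minimal sketch, not a diagnosis, and the log path below is a placeholder - you can force PHP to log everything and capture the last error plus peak memory from a shutdown handler, since a CLI script that dies on a fatal error often leaves nothing in the normal logs (requires PHP 5.2+ for error_get_last() and memory_get_peak_usage()):

    <?php
    // Hypothetical debugging preamble for the CLI script; adjust the log path.
    error_reporting(E_ALL);
    ini_set('display_errors', '1');           // CLI process: errors go to the terminal
    ini_set('log_errors', '1');
    ini_set('error_log', '/tmp/sms_import_debug.log');
    set_time_limit(0);                        // rule out max_execution_time killing the run

    function debug_shutdown()
    {
        // Runs even when the script dies on a fatal error.
        $err = error_get_last();
        error_log(sprintf(
            'shutdown: peak memory %.1f MB, last error: %s',
            memory_get_peak_usage(true) / 1048576,
            $err ? "{$err['message']} in {$err['file']}:{$err['line']}" : 'none'
        ));
    }
    register_shutdown_function('debug_shutdown');

    If the handler reports no error at all, the process was probably terminated from outside PHP (for example by the kernel OOM killer or a host watchdog); on CentOS that would show up in dmesg or /var/log/messages rather than in PHP's logs, and a terminated child whose parent never calls wait() is exactly what appears as a zombie process.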

    Read the article

  • Configuring Hibernate logging using Log4j XML config file?

    - by James McMahon
    I haven't been able to find any documentation on how to configure Hibernate's logging using the XML-style configuration file for Log4j. Is this even possible, or do I have to use a properties-style configuration file to control Hibernate's logging? If anyone has any information or links to documentation, it would be appreciated. EDIT: Just to clarify, I am looking for an example of the actual XML syntax to control Hibernate. EDIT2: Here is what I have in my XML config file:

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Threshold" value="info"/>
            <param name="Target" value="System.out"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{ABSOLUTE} [%t] %-5p %c{1} - %m%n"/>
            </layout>
        </appender>
        <appender name="rolling-file" class="org.apache.log4j.RollingFileAppender">
            <param name="file" value="Program-Name.log"/>
            <param name="MaxFileSize" value="1000KB"/>
            <!-- Keep one backup file -->
            <param name="MaxBackupIndex" value="4"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d [%t] %-5p %l - %m%n"/>
            </layout>
        </appender>
        <root>
            <priority value="debug" />
            <appender-ref ref="console" />
            <appender-ref ref="rolling-file" />
        </root>
    </log4j:configuration>

    Logging works fine, but I am looking for a way to turn down and control the Hibernate logging separately from my application-level logging, as it is currently flooding my logs. I have found examples of doing this with a properties file; I was just wondering how I can do the same in an XML file.
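    For reference, the usual way to express this in the XML format is a <logger> element for the org.hibernate category (and, optionally, org.hibernate.SQL if you still want to see the generated SQL), placed before <root>. A minimal sketch - the levels here are only an example, not a recommendation:

    <!-- Quiet Hibernate independently of the application's root level. -->
    <logger name="org.hibernate">
        <level value="warn"/>
    </logger>

    <!-- Optional: keep statement logging while everything else stays quiet. -->
    <logger name="org.hibernate.SQL">
        <level value="debug"/>
    </logger>

    <root>
        <priority value="debug" />
        <appender-ref ref="console" />
        <appender-ref ref="rolling-file" />
    </root>

    This leaves the application's own packages at the root level while Hibernate's categories inherit the quieter setting; note that in log4j's DTD the <logger> elements must appear before <root>.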

    Read the article
