Search Results

Search found 15408 results on 617 pages for 'import module'.

Page 583/617 | < Previous Page | 579 580 581 582 583 584 585 586 587 588 589 590  | Next Page >

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z); N is finite, say 9. Provide basic arithmetic, at the very least the following 3: Addition (A+B), Subtraction (A-B), Whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above. It's "standard": currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast): something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts, deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (i.e. a base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent). Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.
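
    For what it's worth, a minimal sketch of the brute-force "convert, operate, convert back" route mentioned above - written in Python purely for illustration, since the thread doesn't settle on a particular CPAN module; the fixed 9-digit width is an assumption taken from the question. At 9 base36 digits the values top out around 1.0e14, which fits comfortably in a native 64-bit integer, so speed should not be a concern:

        # Illustrative sketch (Python, not Perl) of base36 <-> integer conversion
        # plus the three required operations; digits are 0-9 then A-Z.
        DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def from_base36(s):
            n = 0
            for ch in s.upper():
                n = n * 36 + DIGITS.index(ch)
            return n

        def to_base36(n, width=9):
            out = []
            while n:
                n, r = divmod(n, 36)
                out.append(DIGITS[r])
            return "".join(reversed(out)).rjust(width, "0")

        def add36(a, b): return to_base36(from_base36(a) + from_base36(b))
        def sub36(a, b): return to_base36(from_base36(a) - from_base36(b))
        def div36(a, b): return to_base36(from_base36(a) // from_base36(b))

        print(add36("00000000Z", "000000001"))   # prints 000000010

    The same dozen lines port directly to Perl on top of ordinary integers, or can wrap whatever CPAN module ends up being chosen.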

    Read the article

  • Properly Establishing an ApplicationEndpoint in UCMA 3.0

    - by user570720
    I've been struggling with getting an application endpoint working on UCMA 3.0. I am trying to run an application on a server separate from the Lync server which uses a registered ApplicationEndpoint to monitor presence and act as a bot which can send other users messages. I used to have my code working with a UserEndpoint (which was fine for monitoring presence), but did not have the capabilities to send IMs to other Lync users. After searching the web, I'm finally at the point where I'm getting this error when running my code: System.ArgumentException was unhandled Message=An ApplicationEndpoint can be registered only if proxy and Multual Tls have been specified. Source=Microsoft.Rtc.Collaboration StackTrace: at Microsoft.Rtc.Collaboration.ApplicationEndpoint..ctor(CollaborationPlatform platform, ApplicationEndpointSettings settings) at Waldo.endpointHelper.CreateApplicationEndpoint(ApplicationEndpointSettings applicationEndpointSettings) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\endpointHelper.cs:line 117 at Waldo.endpointHelper.CreateEstablishedApplicationEndpoint(String endpointFriendlyName) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\endpointHelper.cs:line 228 at Waldo.waldoGrabPresence.Run() in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\waldoGrabPresence.cs:line 60 at Waldo.waldoGrabPresence.Main(String[] args) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\waldoGrabPresence.cs:line 42 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: After some searching, I followed the instructions here: http://blogs.claritycon.com/blogs/michael_greenlee/archive/2009/03/21/installing-a-certificate-for-ucma-v2-0-applications.aspx to import a certificate onto the server that I'm trying to run the application on, but to no avail. So at this point, I think that there must be something wrong with how I'm setting up the ApplicationEndpointSettings, CollaberationPlatform or ApplicationEndpoint objects. Here's how I'm doing it: ApplicationEndpointSettings settings = new ApplicationEndpointSettings(_ownerURIPrompt, _serverFQDNPrompt, _trustedPortPrompt); ServerPlatformSettings settings = new ServerPlatformSettings(null, _serverFQDNPrompt, _trustedPortPrompt, _trustedApplicationGRUU); _collabPlatform = new CollaborationPlatform(settings); _applicationEndpoint = new ApplicationEndpoint(_collabPlatform, applicationEndpointSettings); Does anyone see any problems with what I'm doing? Or, better yet, does anyone know of a blog that walks you through establishing an application endpoint in the situation I'm in? I work really well with tutorials or samples, but have not found one that seems to accomplish what I'm trying to do. Thanks for the help!

    Read the article

  • Where is the method call in the EXE file?

    - by Victor Hurdugaci
    Introduction After watching this video from LIDNUG, about .NET code protection http://secureteam.net/lidnug_recording/Untitled.swf (especially from 46:30 to 57:30), I would like to locate the call to a MessageBox.Show in an EXE I created. The only logic in my "TrialApp.exe" is: public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { MessageBox.Show("This is trial app"); } } Compiled on the Release configuration: http://rapidshare.com/files/392503054/TrialApp.exe.html What I do to locate the call Run the application in WinDBG and break after the message box appears. Get the CLR stack with !clrstack: 0040e840 5e21350b [InlinedCallFrame: 0040e840] System.Windows.Forms.SafeNativeMethods.MessageBox(System.Runtime.InteropServices.HandleRef, System.String, System.String, Int32) 0040e894 5e21350b System.Windows.Forms.MessageBox.ShowCore(System.Windows.Forms.IWin32Window, System.String, System.String, System.Windows.Forms.MessageBoxButtons, System.Windows.Forms.MessageBoxIcon, System.Windows.Forms.MessageBoxDefaultButton, System.Windows.Forms.MessageBoxOptions, Boolean) 0040e898 002701f0 [InlinedCallFrame: 0040e898] 0040e934 002701f0 TrialApp.Form1.Form1_Load(System.Object, System.EventArgs) Get the MethodDesc structure (using the address of Form1_Load) !ip2md 002701f0 MethodDesc: 001762f8 Method Name: TrialApp.Form1.Form1_Load(System.Object, System.EventArgs) Class: 00171678 MethodTable: 00176354 mdToken: 06000005 Module: 00172e9c IsJitted: yes CodeAddr: 002701d0 Transparency: Critical Source file: D:\temp\TrialApp\TrialApp\Form1.cs @ 22 Dump the IL of this method (by MethodDesc) !dumpil 001762f8 IL_0000: ldstr "This is trial app" IL_0005: call System.Windows.Forms.MessageBox::Show IL_000a: pop IL_000b: ret So, as the video mentioned, the call to Show is 5 bytes from the beginning of the method implementation. Now I open CFFExplorer (just like in the video) and get the RVA of the Form1_Load method: 00002083. After this, I go to Address Converter (again in CFF Explorer) and navigate to offset 00002083. There we have: 32 72 01 00 00 70 28 16 00 00 0A 26 2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 In the video it is mentioned that the first 12 bytes are for the method header, so I skip them: 2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 5 bytes from the beginning of the implementation should be the opcode for method call (28). Unfortunately, it is not there. 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 Questions: What am I doing wrong? Why is there no method call at that position in the file? Or maybe the video is missing some information... Why does the guy in that video replace the call with 9 zeros?
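
    Not a definitive answer, but one thing worth re-checking against the ECMA-335 method body format: a method body starts with either a 1-byte "tiny" header (low two bits of the first byte are 0x2, code size in the top six bits) or a 12-byte "fat" header (low two bits are 0x3). The first byte quoted above is 0x32, i.e. a tiny header with a 12-byte code size, which would explain why skipping 12 header bytes lands past the call. A small Python sketch of that check, run over the bytes quoted above (illustration only):

        # Decode the header at the method's RVA and locate the call (0x28) opcode.
        # Header layout per ECMA-335; the hex below is the dump from the question.
        raw = bytes.fromhex("327201000070281600000A262A")

        flags = raw[0] & 0x03
        if flags == 0x02:                  # tiny header: 1 byte, size = byte >> 2
            header_size, code_size = 1, raw[0] >> 2
        elif flags == 0x03:                # fat header: 12 bytes, size at offset 4
            header_size, code_size = 12, int.from_bytes(raw[4:8], "little")
        else:
            raise ValueError("not a method body header")

        il = raw[header_size:header_size + code_size]
        print(header_size, code_size)      # 1 12
        print(il.index(0x28))              # 5 -- the call is 5 bytes into the IL

    That matches the !dumpil output (ldstr is 5 bytes, then the call), so the 12-byte skip in the video presumably assumed a fat header.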

    Read the article

  • How to store date into Mysql database with play framework in scala?

    - by Rahul Kulhari
    I am working with play framework with scala and what am i doing : login page to login into web app sign up page to register into web app after login i want to store all databases values to user what i want to do: when user register for web app then i want to store user values into database with current time and date but my form is giving error. error: List(FormError(dates,error.required,List())),None) controllers/Application.scala object Application extends Controller { val ta:Form[Keyword] = Form( mapping( "id" -> ignored(NotAssigned:Pk[Long]), "word" -> nonEmptyText, "blog" -> nonEmptyText, "cat" -> nonEmptyText, "score"-> of[Long], "summaryId"-> nonEmptyText, "dates" -> date("yyyy-MM-dd HH:mm:ss") )(Keyword.apply)(Keyword.unapply) ) def index = Action { Ok(html.index(ta)); } def newTask= Action { implicit request => ta.bindFromRequest.fold( errors => {println(errors) BadRequest(html.index(errors))}, keywo => { Keyword.create(keywo) Ok(views.html.data(Keyword.all())) } ) } models/keyword.scala case class Keyword(id: Pk[Long],word: String,blog: String,cat: String,score: Long, summaryId: String,dates: Date ) object Keyword { val keyw = { get[Pk[Long]]("keyword.id") ~ get[String]("keyword.word")~ get[String]("keyword.blog")~ get[String]("keyword.cat")~ get[Long]("keyword.score") ~ get[String]("keyword.summaryId")~ get[Date]("keyword.dates") map { case id~blog~cat~word~score~summaryId~dates => Keyword(id,word,blog,cat,score, summaryId,dates) } } def all(): List[Keyword] = DB.withConnection { implicit c => SQL("select * from keyword").as(Keyword.keyw *) } def create(key: Keyword){DB.withConnection{implicit c=> SQL("insert into keyword values({word},{blog}, {cat}, {score},{summaryId},{dates})").on('word-> key.word,'blog->key.blog, 'cat -> key.cat, 'score-> key.score, 'summaryId -> key.summaryId, 'dates->new Date()).executeUpdate } } views/index.scala.html @(taskForm: Form[Keyword]) @import helper._ @main("Todo list") { @form(routes.Application.newTask) { @inputText(taskForm("word")) @inputText(taskForm("blog")) @inputText(taskForm("cat")) @inputText(taskForm("score")) @inputText(taskForm("summaryId")) <input type="submit"> <a href="">Go Back</a> } } please give me some idea to store date into mysql databse and date is not a field of form

    Read the article

  • Exception in thread "main" java.lang.StackOverflowError

    - by Ray.R.Chua
    I have a piece of code and I could not figure out why it is giving me Exception in thread "main" java.lang.StackOverflowError. This is the question: Given a positive integer n, prints out the sum of the lengths of the Syracuse sequence starting in the range of 1 to n inclusive. So, for example, the call: lengths(3) will return the combined length of the sequences: 1 2 1 3 10 5 16 8 4 2 1 which is the value: 11. lengths must throw an IllegalArgumentException if its input value is less than one. My Code: import java.util.HashMap; public class Test { HashMap<Integer,Integer> syraSumHashTable = new HashMap<Integer,Integer>(); public Test(){ } public int lengths(int n)throws IllegalArgumentException{ int sum =0; if(n < 1){ throw new IllegalArgumentException("Error!! Invalid Input!"); } else{ for(int i =1; i<=n;i++){ if(syraSumHashTable.get(i)==null) { syraSumHashTable.put(i, printSyra(i,1)); sum += (Integer)syraSumHashTable.get(i); } else{ sum += (Integer)syraSumHashTable.get(i); } } return sum; } } private int printSyra(int num, int count){ int n = num; if(n == 1){ return count; } else{ if(n%2==0){ return printSyra(n/2, ++count); } else{ return printSyra((n*3)+1, ++count) ; } } } } Driver code: public static void main(String[] args) { // TODO Auto-generated method stub Test s1 = new Test(); System.out.println(s1.lengths(90090249)); //System.out.println(s1.lengths(5)); } I know the problem lies with the recursion. The error does not occur if the input is a small value, example: 5. But when the number is huge, like 90090249, I get the Exception in thread "main" java.lang.StackOverflowError. Thanks all for your help. :) I almost forgot the error msg: Exception in thread "main" java.lang.StackOverflowError at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:65) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:65) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60)
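
    The overflow comes from printSyra recursing once per element of the sequence, so the call stack is the wrong place to keep the count when sequences get long. An iterative count avoids it entirely; a sketch in Python purely for illustration (the same change applies directly to printSyra, and in the Java version the working variable should also be a long, since intermediate values for starts near 90090249 can exceed Integer.MAX_VALUE):

        # Iterative sequence-length computation -- no recursion, so no stack growth.
        def syracuse_length(n):
            count = 1
            while n != 1:
                n = n // 2 if n % 2 == 0 else 3 * n + 1
                count += 1
            return count

        def lengths(n):
            if n < 1:
                raise ValueError("input must be >= 1")
            return sum(syracuse_length(i) for i in range(1, n + 1))

        print(lengths(3))   # 11, matching the example in the question

    The per-start memoization in the HashMap is still worth keeping for large n; the point is only that the walk along one sequence should be a loop rather than a chain of recursive calls.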

    Read the article

  • SWIG & Java Use of carrays.i and array_functions for C Array of Strings

    - by c12
    I have the below configuration where I'm trying to create a test C function that returns a pointer to an Array of Strings and then wrap that using SWIG's carrays.i and array_functions so that I can access the Array elements in Java. Uncertainties: %array_functions(char, SWIGArrayUtility); - not sure if char is correct inline char *getCharArray() - not sure if C function signature is correct String result = getCharArray(); - String return seems odd, but that's what is generated by SWIG SWIG.i: %module Test %{ #include "test.h" %} %include <carrays.i> %array_functions(char, SWIGArrayUtility); %include "test.h" %pragma(java) modulecode=%{ public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); } return ret; } %} Inline Header C Function: #ifndef TEST_H #define TEST_H inline static unsigned short numFoo() { return 3; } inline char *getCharArray(){ static char* foo[3]; foo[0]="ABC"; foo[1]="5CDE"; foo[2]="EEE6"; return foo; } #endif Java Main Tester: public class TestMain { public static void main(String[] args) { System.loadLibrary("TestJni"); char[] test = Test.getCharArrayImpl(); System.out.println("length=" + test.length); for(int i=0; i < test.length; i++){ System.out.println(test[i]); } } } Java Main Tester Output: length=3 ? ? , SWIG Generated Java APIs: public class Test { public static String new_SWIGArrayUtility(int nelements) { return TestJNI.new_SWIGArrayUtility(nelements); } public static void delete_SWIGArrayUtility(String ary) { TestJNI.delete_SWIGArrayUtility(ary); } public static char SWIGArrayUtility_getitem(String ary, int index) { return TestJNI.SWIGArrayUtility_getitem(ary, index); } public static void SWIGArrayUtility_setitem(String ary, int index, char value) { TestJNI.SWIGArrayUtility_setitem(ary, index, value); } public static int numFoo() { return TestJNI.numFoo(); } public static String getCharArray() { return TestJNI.getCharArray(); } public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); System.out.println("result=" + result); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); System.out.println("ret[" + i + "]=" + ret[i]); } return ret; } }

    Read the article

  • NullPointerException using datanucleus-json with S3

    - by Matt
    I'm using datanucleus 3.2.7 from Maven, trying to use the Amazon S3 JPA provider. I can successfully write data into S3, but querying either by using "SELECT u FROM User u" or "SELECT u FROM User u WHERE id = :id" causes a NullPointerException to be thrown. Using the RDBMS provider, everything works perfectly. Is there something I'm doing wrong? Main.java EntityManagerFactory factory = Persistence.createEntityManagerFactory("MyUnit"); EntityManager entityManager = factory.createEntityManager(); Query query = entityManager.createQuery("SELECT u FROM User u", User.class); List<User> users = query.getResultList(); // Null pointer exception here for(User u:users) System.out.println(u); User.java package test; import javax.persistence.*; @Entity @Table(name = "User") public class User { @Id public String id; public String name; public User(String id, String name) { this.id = id; this.name = name; } public String toString() { return id+" : "+name; } } persistence.xml <?xml version="1.0" encoding="UTF-8" ?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="MyUnit"> <class>test.User</class> <exclude-unlisted-classes /> <properties> <properties> <property name="datanucleus.ConnectionURL" value="amazons3:http://s3.amazonaws.com/" /> <property name="datanucleus.ConnectionUserName" value="xxxxx" /> <property name="datanucleus.ConnectionPassword" value="xxxxx" /> <property name="datanucleus.cloud.storage.bucket" value="my-bucket" /> </properties> <property name="datanucleus.autoCreateSchema" value="true" /> </properties> </persistence-unit> </persistence> Exception java.lang.NullPointerException at org.datanucleus.NucleusContext.isClassWithIdentityCacheable(NucleusContext.java:1840) at org.datanucleus.ExecutionContextImpl.getObjectFromLevel2Cache(ExecutionContextImpl.java:5287) at org.datanucleus.ExecutionContextImpl.getObjectFromCache(ExecutionContextImpl.java:5191) at org.datanucleus.ExecutionContextImpl.findObject(ExecutionContextImpl.java:3137) at org.datanucleus.store.json.CloudStoragePersistenceHandler.getObjectsOfCandidateType(CloudStoragePersistenceHandler.java:367) at org.datanucleus.store.json.query.JPQLQuery.performExecute(JPQLQuery.java:94) at org.datanucleus.store.query.Query.executeQuery(Query.java:1786) at org.datanucleus.store.query.Query.executeWithMap(Query.java:1690) at org.datanucleus.api.jpa.JPAQuery.getResultList(JPAQuery.java:194) at test.Main.main(Main.java:16)

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted) txt file from the USDA's website. It is the NUT_DATA.txt file. But the problem is that it is almost 27 MB! I was successful in importing a few other smaller files, but my method was using file_get_contents, so it makes sense that an error would be thrown when I try to snag 27+ MB of RAM. So how can I import this massive file to my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into a timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out what number represented what nutrient; I found this data very poorly documented. Sorry about the ugliness of the code): <? $file = "NUT_DATA.txt"; $data = split("\n", file_get_contents($file)); // split each line $link = mysql_connect("localhost", "username", "password"); mysql_select_db("database", $link); for($i = 0, $e = sizeof($data); $i < $e; $i++) { $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES("; $row = split("\^", trim($data[$i])); // split each line by caret for ($j = 0, $k = sizeof($row); $j < $k; $j++) { $val = trim($row[$j], '~'); $val = (empty($val)) ? 0 : $val; $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildes and replaces empty strings with 0s } $sql = rtrim($sql, ',') . ");"; mysql_query($sql) or die(mysql_error()); // query the db } echo "Finished inserting data into database.\n"; mysql_close($link); ?>
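
    The usual ways around both limits are to stream the file one line at a time while batching the INSERTs (so neither the whole file in memory nor one query per row), or to hand the whole job to MySQL's LOAD DATA LOCAL INFILE and let the server do the parsing. A rough sketch of the first idea, in Python rather than PHP purely to show the shape of it (in PHP the equivalent is fgets() in a loop plus a multi-row INSERT every N lines); the connection details, the 17-column count and the pymysql driver are placeholders taken from the script above:

        import pymysql  # assumed MySQL driver; any DB-API module behaves the same

        conn = pymysql.connect(host="localhost", user="username",
                               password="password", db="database")
        sql = "INSERT INTO USDA VALUES (" + ",".join(["%s"] * 17) + ")"
        BATCH = 500

        def parse(line):
            # fields are ^-separated and ~-quoted; empty fields become 0, as above
            return [f.strip("~") or "0" for f in line.rstrip("\r\n").split("^")]

        with conn.cursor() as cur, open("NUT_DATA.txt") as fh:
            rows = []
            for line in fh:                  # streams the file line by line
                if not line.strip():
                    continue
                rows.append(parse(line))
                if len(rows) >= BATCH:       # one round trip per 500 rows
                    cur.executemany(sql, rows)
                    rows = []
            if rows:
                cur.executemany(sql, rows)
        conn.commit()

    Batching is what beats the timeout: the cost of the original script is dominated by the tens or hundreds of thousands of single-row queries, not by the parsing.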

    Read the article

  • Python4Delphi: Returning a python object in a function. (DelphiWrapper)

    - by Gabriel Fonseca
    I am using python4delphi. ow can I return an object from a wrapped Delphi class function? Code Snippet: I have a simple Delphi Class that i wrapped to Python Script, right? TSimple = Class Private function getvar1:string; Public Published property var1:string read getVar1; function getObj:TSimple; end; ... function TSimple.getVar1:string; begin result:='hello'; end; function TSimple.getObj:TSimple; begin result:=self; end; I made the TPySimple like the demo32 to give class access to the python code. My python module name is test. TPyDado = class(TPyDelphiPersistent) // Constructors & Destructors constructor Create( APythonType : TPythonType ); override; constructor CreateWith( PythonType : TPythonType; args : PPyObject ); override; // Basic services function Repr : PPyObject; override; class function DelphiObjectClass : TClass; override; end; ... { TPyDado } constructor TPyDado.Create(APythonType: TPythonType); begin inherited; // we need to set DelphiObject property DelphiObject := TDado.Create; with TDado(DelphiObject) do begin end; Owned := True; // We own the objects we create end; constructor TPyDado.CreateWith(PythonType: TPythonType; args: PPyObject); begin inherited; with GetPythonEngine, DelphiObject as TDado do begin if PyArg_ParseTuple( args, ':CreateDado' ) = 0 then Exit; end; end; class function TPyDado.DelphiObjectClass: TClass; begin Result := TDado; end; function TPyDado.Repr: PPyObject; begin with GetPythonEngine, DelphiObject as TDado do Result := VariantAsPyObject(Format('',[])); // or Result := PyString_FromString( PAnsiChar(Format('(%d, %d)',[x, y])) ); end; And now the python code: import test a = test.Simple() # try access the property var1 and everything is right print a.var1 # work's, but.. b = a.getObj(); # raise a exception that not find any attributes named getObj. # if the function returns a string for example, it's work.

    Read the article

  • How to dispose of a NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object in standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, call I have Release() call a MyDispose() method on my COM object? My code to declare the object (C++/CLI): [Guid("57ED5388-blahblah")] [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)] [ComVisible(true)] public interface class IFoo { void Doit(); }; [Guid("417E5293-blahblah")] [ClassInterface(ClassInterfaceType::None)] [ComVisible(true)] public ref class Foo : IFoo { public: void MyDispose(); ~Foo() {MyDispose();} // This is never called !Foo() {MyDispose();} // This is called by the garbage collector. virtual ULONG Release() {MyDispose();} // This is never called virtual void Doit(); }; My code to use the object (native C++): #import "..\\Debug\\Foo.tlb" ... Bar::IFoo setup(__uuidof(Bar::Foo)); // This object comes from the .tlb. setup.Doit(); setup-Release(); // explicit release, not really necessary since Bar::IFoo's destructor will call Release(). If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope it automatically calls my .NET object's dispose code. I would think I could do it by overriding the Release(), and if the object count = 0 then call MyDispose(). But apparently I'm not overriding Release() correctly because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling on this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to Dispose of a .NET object.

    Read the article

  • Injecting jQuery into a page fails when using Google AJAX Libraries API

    - by jakemcgraw
    I'd like to inject jQuery into a page using the Google AJAX Libraries API, I've come up with the following solution: http://my-domain.com/inject-jquery.js: ;((function(){ // Call this function once jQuery is available var func = function() { jQuery("body").prepend('<div>jQuery Rocks!</div>'); }; // Detect if page is already using jQuery if (!window.jQuery) { var done = false; var head = document.getElementsByTagName('head')[0]; var script = document.createElement("script"); script.src = "http://www.google.com/jsapi"; script.onload = script.onreadystatechange = function(){ // Once Google AJAX Libraries API is loaded ... if (!done && (!this.readyState || this.readyState == "loaded" || this.readyState == "complete")) { done = true; // ... load jQuery ... window.google.load("jquery", "1", {callback:function(){ jQuery.noConflict(); // ... jQuery available, fire function. func(); }}); // Prevent IE memory leaking script.onload = script.onreadystatechange = null; head.removeChild(script); } } // Load Google AJAX Libraries API head.appendChild(script); // Page already using jQuery, fire function } else { func(); } })()); The script would then be included in a page on a separate domain: http://some-other-domain.com/page.html: <html> <head> <title>This is my page</title> </head> <body> <h1>This is my page.</h1> <script src="http://my-domain.com/inject-jquery.js"></script> </body> </html> In Firefox 3 I get the following error: Module: 'jquery' must be loaded before DOM onLoad! jsapi (line 16) The error appears to be specific to the Google AJAX Libraries API, as I've seen others use a jQuery bookmarklet to inject jQuery into the current page. My question: Is there a method for injecting the Google AJAX Libraries API / jQuery into a page regardless of the onload/onready state?

    Read the article

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the next instructions to get some registers from my Database. Create the needed models (from the params module): $obj_paramtype_model = new Params_Model_DbTable_Paramtype(); $obj_param_model = new Params_Model_DbTable_Param(); Getting the available locales from the database // This returns a Zend_Db_Table_Row_Abstract class object $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales'); // This is a query used to add conditions to the next sentence. This is executed from the Params_Model_DbTable_Param instance class, that depends from Params_Model_DbTable_Paramtype class (reference map and dependentTables arrays are fine in both classes) $obj_select = $this->select()->where('deleted_at IS NULL')->order('name'); // Execute the next query, applying the select restrictions. This returns a Zend_Db_Table_Rowset_Abstract class object. This means "Find Params by Paramtype" $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype); // Here the firebug log displays the queries.... Zend_Registry::get('log')->debug($obj_params_rowset); I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (that includes Firebug writers), shows the executed SQL Queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL Queries in some MySQL Client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with latin characters (ñ). In other words, the external SQL client shows es_CO | Español de Colombia and en_US | English of United States but the Query results from Zend displays (and returns) es_CO | null and en_US | English of United States. I've deleted the ñ character from Español de Colombia and the query results are just fine in my Zend Log Firebug screen, and in the final Zend Form element. The MySQL database, tables and columns are in UTF-8 - utf8_unicode_ci collation. All my zend framework pages are in UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache at port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate; Zend Framework 1.10.1. I'm sorry if there is so much information, but I don't really know why could that happen, so I tryed to provide as much related information as I could to help to find some answer.

    Read the article

  • itextsharp PdfCopy and landscape pages

    - by Andreas Rehm
    I'm using itextsharp to join mutiple pdf documents and add a footer. My code works fine - except for landscape pages - it isn't detecting the page rotation - the footer is not centerd for landscape: public static int AddPagesFromStream(Document document, PdfCopy pdfCopy, Stream m, bool addFooter, int detailPages, string footer, int footerPageNumOffset, int numPages, string pageLangString, string printLangString) { CreateFont(); try { m.Seek(0, SeekOrigin.Begin); var reader = new PdfReader(m); // get page count var pdfPages = reader.NumberOfPages; var i = 0; // add pages while (i < pdfPages) { i++; // import page with pdfcopy var page = pdfCopy.GetImportedPage(reader, i); // get page center float posX; float posY; var rotation = page.BoundingBox.Rotation; if (rotation == 0 || rotation == 180) { posX = page.Width / 2; posY = 0; } else { posX = page.Height / 2; posY = 20f; } var ps = pdfCopy.CreatePageStamp(page); var cb = ps.GetOverContent(); // add footer cb.SetColorFill(BaseColor.WHITE); var gs1 = new PdfGState {FillOpacity = 0.8f}; cb.SetGState(gs1); cb.Rectangle(0, 0, document.PageSize.Width, 46f + posY); cb.Fill(); // Text cb.SetColorFill(BaseColor.BLACK); cb.SetFontAndSize(baseFont, 7); cb.BeginText(); // create text var pages = string.Format(pageLangString, i + footerPageNumOffset, numPages); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, printLangString, posX, 40f + posY, 0f); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, footer, posX, 28f + posY, 0f); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, pages, posX, 20f + posY, 0f); cb.EndText(); ps.AlterContents(); // add page to new pdf pdfCopy.AddPage(page); } // close PdfReader reader.Close(); // return number of pages return i; } catch (Exception e) { Console.WriteLine(e); return 0; } } How do I detect the page rotation (e.g. landscape) format in this case? The given example works for PdfReader but not for PdfCopy. Edit: Why do I need PdfCopy? I tried copying a word pdf export. Some word hyperlinks will not work when you try to copy pages with PdfReader. Only PdfCopy transfers all needed page informations. Edit: (SOLVED) You need to use reader.GetPageRotation(i);

    Read the article

  • Help with Arrays in Objective C.

    - by NJTechie
    Problem: Take an integer as input and print out the word equivalent of each digit of the input. I hacked something together that works in this case, but I know it is not an efficient solution. For instance: 110 should give the following o/p: one one zero. Could someone throw light on effective usage of arrays for this problem? #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int input, i=0, j,k, checkit; int temp[i]; NSLog(@"Enter an integer :"); scanf("%d", &input); checkit = input; while(input > 0) { temp[i] = input%10; input = input/10; i++; } if(checkit != 0) { for(j=i-1;j>=0;j--) { //NSLog(@" %d", temp[j]); k = temp[j]; //NSLog(@" %d", k); switch (k) { case 0: NSLog(@"zero"); break; case 1: NSLog(@"one"); break; case 2: NSLog(@"two"); break; case 3: NSLog(@"three"); break; case 4: NSLog(@"four"); break; case 5: NSLog(@"five"); break; case 6: NSLog(@"six"); break; case 7: NSLog(@"seven"); break; case 8: NSLog(@"eight"); break; case 9: NSLog(@"nine"); break; default: break; } } } else NSLog(@"zero"); [pool drain]; return 0; }
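
    The array the question is reaching for is a single lookup table indexed by the digit value, which replaces the whole switch. A sketch in Python purely for illustration; the same table ports to a plain C array of NSString literals indexed by each remainder in the existing loop. (As a side note, int temp[i] above is declared while i is still 0, so writing into it is undefined behaviour even when it appears to work.)

        # One lookup table indexed by digit value replaces the switch statement;
        # walking the digits left to right avoids storing and reversing remainders.
        WORDS = ["zero", "one", "two", "three", "four",
                 "five", "six", "seven", "eight", "nine"]

        def spell_digits(n):
            return " ".join(WORDS[int(ch)] for ch in str(n))

        print(spell_digits(110))   # one one zero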

    Read the article

  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds: Stocks - Contains unique stock name and its daily percent change. UserTransactions - Contains information regarding a specific purchase of a stock made by a user: the value of the purchase along with a reference to Stock for the current purchase. db.Model Python classes: class Stocks (db.Model): stockname = db.StringProperty(multiline=True) dailyPercentChange=db.FloatProperty(default=1.0) class UserTransactions (db.Model): buyer = db.UserProperty() value=db.FloatProperty() stockref = db.ReferenceProperty(Stocks) Once an hour I need to update the database: update the daily percent change in Stocks and then update the value of all entities in UserTransactions that refer to that stock. The following Python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities which refer to the stock and update their value: Stocks.py # Iterate over all stocks in datastore for stock in Stocks.all(): # update daily percent change in datastore db.run_in_transaction(updateStockTxn, stock.key()) # create a task to update all user transactions entities referring to this stock taskqueue.add(url='/task', params={'stock_key': str(stock.key(), 'value' : self.request.get ('some_val_for_stock') }) def updateStockTxn(stock_key): #fetch the stock again - necessary to avoid concurrency updates stock = db.get(stock_key) stock.dailyPercentChange= data.get('some_val_for_stock') # I get this value from outside ... some more calculations here ... stock.put() Task.py (/task) # Amount of transaction per task amountPerCall=10 stock=db.get(self.request.get("stock_key")) # Get all user transactions which point to current stock user_transaction_query=stock.usertransactions_set cursor=self.request.get("cursor") if cursor: user_transaction_query.with_cursor(cursor) # Spawn another task if more than 10 transactions are in datastore transactions = user_transaction_query.fetch(amountPerCall) if len(transactions)==amountPerCall: taskqueue.add(url='/task', params={'stock_key': str(stock.key(), 'value' : self.request.get ('some_val_for_stock'), 'cursor': user_transaction_query.cursor() }) # Iterate over all transaction pointing to stock and update their value for transaction in transactions: db.run_in_transaction(updateUserTransactionTxn, transaction.key()) def updateUserTransactionTxn(transaction_key): #fetch the transaction again - necessary to avoid concurrency updates transaction = db.get(transaction_key) transaction.value= transaction.value* self.request.get ('some_val_for_stock') db.put(transaction) The problem: Currently the system works great, but the problem is that it is not scaling well… I have around 100 Stocks with 300 User Transactions, and I run the update every hour. In the dashboard, I see that the task.py takes around 65% of the CPU (Stock.py takes around 20%-30%) and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the problem is the scaling of the system… Using 6.5 CPU hours for 100 stocks is very poor. I was wondering, given the requirements of the system as mentioned above, if there is a better and more efficient implementation (or just a small change that can help with the current implementation) than the one presented here. Thanks!! Joel
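
    For comparison, the dominant cost in Task.py as written is one run_in_transaction round trip (a get plus a put) per UserTransactions entity. If these entities are only ever written by this hourly job, a hedged sketch of a batched variant of the task handler looks like this (names follow the code above; the larger page size and the consistent 'some_val_for_stock' parameter are assumptions):

        # Batch the updates: one fetch, in-memory changes, one db.put per page.
        # Assumes nothing else writes these entities concurrently; if that is not
        # true, the per-entity transactions (or an entity group) are still needed.
        amountPerCall = 100
        factor = float(self.request.get('some_val_for_stock'))

        stock = db.get(self.request.get('stock_key'))
        query = stock.usertransactions_set
        cursor = self.request.get('cursor')
        if cursor:
            query.with_cursor(cursor)

        transactions = query.fetch(amountPerCall)
        for transaction in transactions:
            transaction.value *= factor          # update in memory only

        db.put(transactions)                     # one datastore RPC for the page

        if len(transactions) == amountPerCall:   # page through the rest, as before
            taskqueue.add(url='/task', params={
                'stock_key': self.request.get('stock_key'),
                'some_val_for_stock': factor,
                'cursor': query.cursor(),
            })

    The same idea applies in Stocks.py: read the ~100 stocks, update them in memory, and write them back with a single db.put(list), which turns several hundred datastore calls per run into a handful.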

    Read the article

  • Django Multi-Table Inheritance VS Specifying Explicit OneToOne Relationship in Models

    - by chefsmart
    Hope all this makes sense :) I'll clarify via comments if necessary. Also, I am experimenting using bold text in this question, and will edit it out if I (or you) find it distracting. With that out of the way... Using django.contrib.auth gives us User and Group, among other useful things that I can't do without (like basic messaging). In my app I have several different types of users. A user can be of only one type. That would easily be handled by groups, with a little extra care. However, these different users are related to each other in hierarchies / relationships. Let's take a look at these users: - Principals - "top level" users Administrators - each administrator reports to a Principal Coordinators - each coordinator reports to an Administrator Apart from these there are other user types that are not directly related, but may get related later on. For example, "Company" is another type of user, and can have various "Products", and products may be supervised by a "Coordinator". "Buyer" is another kind of user that may buy products. Now all these users have various other attributes, some of which are common to all types of users and some of which are distinct only to one user type. For example, all types of users have to have an address. On the other hand, only the Principal user belongs to a "BranchOffice". Another point, which was stated above, is that a User can only ever be of one type. The app also needs to keep track of who created and/or modified Principals, Administrators, Coordinators, Companies, Products etc. (So that's two more links to the User model.) In this scenario, is it a good idea to use Django's multi-table inheritance as follows: - from django.contrib.auth.models import User class Principal(User): # # # branchoffice = models.ForeignKey(BranchOffice) landline = models.CharField(blank=True, max_length=20) mobile = models.CharField(blank=True, max_length=20) created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator") modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier") # # # Or should I go about doing it like this: - class Principal(models.Model): # # # user = models.OneToOneField(User, blank=True) branchoffice = models.ForeignKey(BranchOffice) landline = models.CharField(blank=True, max_length=20) mobile = models.CharField(blank=True, max_length=20) created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator") modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier") # # # Please keep in mind that there are other user types that are related via foreign keys, for example: - class Administrator(models.Model): # # # principal = models.ForeignKey(Principal, help_text="The supervising principal for this Administrator") user = models.OneToOneField(User, blank=True) province = models.ForeignKey( Province) landline = models.CharField(blank=True, max_length=20) mobile = models.CharField(blank=True, max_length=20) created_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratorcreator") modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratormodifier") I am aware that Django does use a one-to-one relationship for multi-table inheritance behind the scenes. I am just not qualified enough to decide which is a more sound approach.
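
    Since both designs end up as a one-to-one row pointing at the auth User, the practical difference is mostly what every query joins against and how the User is reached. A small hedged sketch of how each option reads in view code (field names follow the models above, the username is made up):

        # Option 1: multi-table inheritance -- a Principal IS a User; auth fields
        # are inherited and Django manages the hidden user_ptr OneToOne itself.
        p = Principal.objects.get(username="alice")
        print(p.username, p.branchoffice)        # every Principal query joins Users

        # Option 2: explicit OneToOneField -- a Principal HAS a User (profile style).
        p = Principal.objects.select_related("user").get(user__username="alice")
        print(p.user.username, p.branchoffice)   # one explicit hop through .user

    Neither design enforces the "a user can be of only one type" rule by itself; that still needs validation or a constraint of its own.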

    Read the article

  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly. Base = declarative_base() class Keyword(Base): __tablename__ = 'Keywords' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True) keyword = Column(String(32), primary_key=True) class Image(Base): __tablename__ = 'Images' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(256), nullable=False) keywords = relationship(Keyword, backref='image') This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work. image = session.query(Image).filter(Image.keywords.contains('boy')) I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck. image = session.query(Image).select_from(join(Image, Keyword)).\ filter(Keyword.keyword == 'boy') I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this. image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy') So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref. result = session.query(Keyword).filter(Keyword.keyword == 'boy').all() I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.
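
    For reference, with the mappings exactly as declared above the query can be written through the relationship itself, and the backref is reached as .image (singular, as declared), which is why .images raised an attribute error. A hedged sketch:

        # Join through the relationship and filter on Keyword:
        images = (session.query(Image)
                  .join(Image.keywords)
                  .filter(Keyword.keyword == 'boy')
                  .all())

        # Equivalent form, pushing the condition into the relationship with any():
        images = (session.query(Image)
                  .filter(Image.keywords.any(Keyword.keyword == 'boy'))
                  .all())

        for image in images:
            print(image.name, [k.keyword for k in image.keywords])

        # From the keyword side, the backref declared above is 'image' (singular):
        tags = session.query(Keyword).filter(Keyword.keyword == 'boy').all()
        print([t.image.name for t in tags])

    Strictly speaking, as declared this is one-to-many (each Keywords row carries a single image id); a true many-to-many, where one keyword row is shared by several images, needs a secondary association table.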

    Read the article

  • Still confused parsing JSON in GWT

    - by graybow
    Please help meee. I create a project named 'tesdb3' in eclipse. I create the PHP side to access the database, and made the output as JSON.. I create the userdata.php in folder war. then I compile tesdb3 project. Folder tesdb3 and the userdata.php in war moved in local server(I use WAMP). I put the PHP in folder tesdb3. This is the result from my localhost/phpmyadmin/tesdb3/userdata.php [{"kode":"002","nama":"bambang gentolet"},{"kode":"012","nama":"Algiz"}] From that result I think the PHP side was working good.Then I create UserData.java as JSNI overlay like this: package com.tesdb3.client; import com.google.gwt.core.client.JavaScriptObject; class UserData extends JavaScriptObject{ protected UserData() {} public final native String getKode() /*-{ return this.kode; }-*/; public final native String getNama() /*-{ return this.nama; }-*/; public final String getFullData() { return getKode() + ":" + getNama(); } } Then Finally in the tesdb3.java: public class Tesdb3 implements EntryPoint { String url= "http://localhost/phpmyadmin/tesdb3/datauser.php"; private native JsArray<UserData> getuserdata(String json) /*-{ return eval(json); }-*/; public void LoadData() throws RequestException{ RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url)); builder.sendRequest(null, new RequestCallback(){ @Override public void onError(Request request, Throwable exception) { Window.alert("error " + exception); } public void onResponseReceived(Request request, Response response) { Window.alert("betul" + response.getText()); //data(getuserdata(response.getText())); } }); } public void data(JsArray<UserData> data){ for (int i = 0; i < data.length(); i++) { String lkode =data.get(i).getKode(); String lname =data.get(i).getNama(); Label l = new Label(lkode+" "+lname); tb.setWidget(i, 0, l); } RootPanel.get().add(new HTML("my data")); RootPanel.get().add(tb); } public void onModuleLoad() { try { LoadData(); } catch (RequestException e) { } } } The result just showing string "my data". And the Window.alert(response.getText()) showing nothing. Whyy?

    Read the article

  • Getting Started with Maven + Jaxb project + IntellijIdea

    - by Em Ae
    I am complete new to IntellijIdea and i am looking for some step-by-step process to set up a basic project. My project depends on Maven + Jaxb classes so i need a Maven project so that when i compile this project, the JAXB Objects are generated by Maven plugins. Now i started like this I created a blank project say MaJa project Added Maven Module to it Added following settings in POM.XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>MaJa</groupId> <artifactId>MaJa</artifactId> <version>1.0</version> <dependencies> <dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.1</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>jaxb2-maven-plugin</artifactId> <executions> <execution> <goals> <goal>xjc</goal> </goals> </execution> </executions> <configuration> <schemaDirectory>${basedir}/src/main/resource/api/MaJa</schemaDirectory> <packageName>com.rimt.shopping.api.web.ws.v1.model</packageName> <outputDirectory>${build.directory}</outputDirectory> </configuration> </plugin> </plugins> </build> </project> First of all, is it right settings ? I tried clicking on Make/Compile 'MaJa' from Project Right Click Menu and it didn't do anything. I will be looking forward to yoru replies.

    Read the article

  • embedded Italic, bold fonts don't look the same in flex as in Windows...

    - by Mark
    ...unless they're something like "Times New Roman" or some other established font with a fully designed italic and bold, presumably in seperate files. Let me explain what I mean (though why no one has commented on this before I have no idea.) Numerous, numerous fonts do not have a seperate file for italic and bold, and in fact to the best of my knowledge don't even have italic and bold defined as such. But if you install them on windows (for example) and then use them in an app, You can still make use of italic and bold with those fonts. For italic, and oblique angle is just given to it, presumably by Windows, and it looks the same in all Windows apps, and the bold is just given a heavier weight. OK, well here's the problem: if you embed a font like that in a Flex app, as a "SystemFont" the italic and bold will not look the same as they do in Windows. Specifically, the oblique angle is invariably much less than in Windows (i.e the italic slant is much less) and the bold version is not bold enough. I vaguely recall thinking that there was some flex mechanism to assign custom oblique angles for italic (and weight for bold) but now can't recall what it is. Does anyone know the correct established way to do this. The following is actually a seperate (but related) font question (in case anyone is expert in all this.) Its rather a lengthy question and can be skipped, but its something that's plagued me for a long time. I mention above embedding as a "SystemFont", so iow something like this: package fonts { import flash.display.Sprite; public class FLW_Script_I extends Sprite { [Embed(systemFont='FLW Script', fontName='FLW Script', fontStyle='italic', fntWeight='normal', mimeType='application/x-font-truetype')] public var wrFont:Class; } } The other alternative to SystemFont for embedding, is "Source" followed by the name of an actual font file. If you try to embed one of the aformentioned single file fonts as a Source file (as opposed to SystemFont) and specify fontStyle='italic', then the mxmlc compiler will return an error and say there is no italic info in the font file. So up to now I have only been embedding these fonts as "SystemFont". The problem is, flex uses two different font compilers internally for Source embedding and SystemFont embedding. For source font embeds it uses the "Batik" compiler and for SystemFont, the JRE (Java Runtime) font compiler. Well actually the Batik is considered a superior compiler and generally produces better looking fonts. And also if you mix normal fonts compiled with Batik and italic compiled with JRE, sometimes the line spacing is different for the two, and it doesn't look right. So does anyone have an idea how to get mxmlc to do italic and bold for these single file fonts when embedding as "Source". Would there be a way using C++ or whatever to construct an "italic" font file from the SystemFont for such a font in windows.

    Read the article

  • How to know when a user has really released a key in Java?

    - by Luis Soeiro
    (Edited for clarity) I want to detect when a user presses and releases a key in Java Swing, ignoring the keyboard auto repeat feature. I also would like a pure Java approach the works on Linux, Mac OS and Windows. Requirements: When the user presses some key I want to know what key is that; When the user releases some key, I want to know what key is that; I want to ignore the system auto repeat options: I want to receive just one keypress event for each key press and just one key release event for each key release; If possible, I would use items 1 to 3 to know if the user is holding more than one key at a time (i.e, she hits 'a' and without releasing it, she hits "Enter"). The problem I'm facing in Java is that under Linux, when the user holds some key, there are many keyPress and keyRelease events being fired (because of the keyboard repeat feature). I've tried some approaches with no success: Get the last time a key event occurred - in Linux, they seem to be zero for key repeat, however, in Mac OS they are not; Consider an event only if the current keyCode is different from the last one - this way the user can't hit twice the same key in a row; Here is the basic (non working) part of code: import java.awt.event.KeyListener; public class Example implements KeyListener { public void keyTyped(KeyEvent e) { } public void keyPressed(KeyEvent e) { System.out.println("KeyPressed: "+e.getKeyCode()+", ts="+e.getWhen()); } public void keyReleased(KeyEvent e) { System.out.println("KeyReleased: "+e.getKeyCode()+", ts="+e.getWhen()); } } When a user holds a key (i.e, 'p') the system shows: KeyPressed: 80, ts=1253637271673 KeyReleased: 80, ts=1253637271923 KeyPressed: 80, ts=1253637271923 KeyReleased: 80, ts=1253637271956 KeyPressed: 80, ts=1253637271956 KeyReleased: 80, ts=1253637271990 KeyPressed: 80, ts=1253637271990 KeyReleased: 80, ts=1253637272023 KeyPressed: 80, ts=1253637272023 ... At least under Linux, the JVM keeps resending all the key events when a key is being hold. To make things more difficult, on my system (Kubuntu 9.04 Core 2 Duo) the timestamps keep changing. The JVM sends a key new release and new key press with the same timestamp. This makes it hard to know when a key is really released. Any ideas? Thanks

    Read the article

  • MKMap showing detail of annotations

    - by yeohchan
    I have encountered a problem of populating the description for each annotation. Each annotation works, but there is somehow an area when trying to click on it. Here is the code. the one in bold is the one that has the problem. -(void)viewDidLoad{ FlickrFetcher *fetcher=[FlickrFetcher sharedInstance]; NSArray *rec=[fetcher recentGeoTaggedPhotos]; for(NSDictionary *dic in rec){ NSLog(@"%@",dic); NSDictionary *string = [fetcher locationForPhotoID:[dic objectForKey:@"id"]]; double latitude = [[string objectForKey:@"latitude"] doubleValue]; double longitude = [[string objectForKey:@"longitude"]doubleValue]; if(latitude !=0 && latitude != 0 ){ CLLocationCoordinate2D coordinate1 = { latitude,longitude }; **NSDictionary *adress=[NSDictionary dictionaryWithObjectsAndKeys:[dic objectForKey:@"owner"],[dic objectForKey:@"title"],nil];** MKPlacemark *anArea=[[MKPlacemark alloc]initWithCoordinate:coordinate1 addressDictionary:adress]; [mapView addAnnotation:anArea]; } } } Here is what the Flickr class does: #import <Foundation/Foundation.h> #define TEST_HIGH_NETWORK_LATENCY 0 typedef enum { FlickrFetcherPhotoFormatSquare, FlickrFetcherPhotoFormatLarge } FlickrFetcherPhotoFormat; @interface FlickrFetcher : NSObject { NSManagedObjectModel *managedObjectModel; NSManagedObjectContext *managedObjectContext; NSPersistentStoreCoordinator *persistentStoreCoordinator; } // Returns the 'singleton' instance of this class + (id)sharedInstance; // // Local Database Access // // Checks to see if any database exists on disk - (BOOL)databaseExists; // Returns the NSManagedObjectContext for inserting and fetching objects into the store - (NSManagedObjectContext *)managedObjectContext; // Returns an array of objects already in the database for the given Entity Name and Predicate - (NSArray *)fetchManagedObjectsForEntity:(NSString*)entityName withPredicate:(NSPredicate*)predicate; // Returns an NSFetchedResultsController for a given Entity Name and Predicate - (NSFetchedResultsController *)fetchedResultsControllerForEntity:(NSString*)entityName withPredicate:(NSPredicate*)predicate; // // Flickr API access // NOTE: these are blocking methods that wrap the Flickr API and wait on the results of a network request // // Returns an array of Flickr photo information for photos with the given tag - (NSArray *)photosForUser:(NSString *)username; // Returns an array of the most recent geo-tagged photos - (NSArray *)recentGeoTaggedPhotos; // Returns a dictionary of user info for a given user ID. individual photos contain a user ID keyed as "owner" - (NSString *)usernameForUserID:(NSString *)userID; // Returns the photo for a given server, id and secret - (NSData *)dataForPhotoID:(NSString *)photoID fromFarm:(NSString *)farm onServer:(NSString *)server withSecret:(NSString *)secret inFormat:(FlickrFetcherPhotoFormat)format; // Returns a dictionary containing the latitue and longitude where the photo was taken (among other information) - (NSDictionary *)locationForPhotoID:(NSString *)photoID; @end

    Read the article

  • Fluent NHibernate not working outside of NUnit test fixtures

    - by thorkia
    Okay, here is my problem... I created a Data Layer using the RTM Fluent Nhibernate. My create session code looks like this: _session = Fluently.Configure(). Database(SQLiteConfiguration.Standard.UsingFile("Data.s3db")) .Mappings( m => { m.FluentMappings.AddFromAssemblyOf<ProductMap>(); m.FluentMappings.AddFromAssemblyOf<ProductLogMap>(); }) .ExposeConfiguration(BuildSchema) .BuildSessionFactory(); When I reference the module in a test project, then create a test fixture that looks something like this: [Test] public void CanAddProduct() { var product = new Product {Code = "9", Name = "Test 9"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); using (ISession session = OrmHelper.OpenSession()) { var fromDb = session.Get<Product>(product.Id); Assert.IsNotNull(fromDb); Assert.AreNotSame(fromDb, product); Assert.AreEqual(fromDb.Id, product.Id); } My tests pass. When I open up the created SQLite DB, the new Product with Code 9 is in it. the tables for Product and ProductLog are there. Now, when I create a new console application, and reference the same library, do something like this: Product product = new Product() {Code = "10", Name = "Hello"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); Console.WriteLine(product.Id); Console.ReadLine(); It doesn't work. I actually get pretty nasty exception chain. To save you lots of head aches, here is the summary: Top Level exception: An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.\r\n\r\n The PotentialReasons collection is empty The Inner exception: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use element in the application configuration file to specify the full name of the assembly. Both the unit test library and the console application reference the exact same version of System.Data.SQLite. Both projects have the exact same DLLs in the debug folder. I even tried copying SQLite DB the unit test library created into the debug directory of the console app, and removed the build schema lines and it still fails If anyone can help me figure out why this won't work outside of my unit tests it would be greatly appreciated. This crazy bug has me at a stand still.

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello All - I have a task to import/transform and extract zipped binary files that contain both text data as well as embedded binary data. Within the data is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a C# single-threaded app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes) and extracts the data on a single thread, line by line, inserting to the database. As you can imagine this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. The follow-on task is to parse those rows into their appropriate tables based on their content. i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do I iterate through a packet of data using SSIS? In the app the file is decompressed and then is parsed using streams data type and byte arrays and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap up the app code into a script task(s) and let it do the custom processing? The data is separated by year and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well and process it by hand, most likely. Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? Not sure how to do the parsing in T-SQL that is involved here. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up? Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server. Getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... its this django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now lets say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently do a get and pull the data back (See this question). Some ways around this are Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key to object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data) template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested we can (atleast to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
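
    As a baseline for comparison, idea 1 above (get_value_for_datastore plus batch gets) is only a dozen lines for this particular template - the @prefetch decorator is essentially a way of producing the same three-RPC access pattern without hand-writing it per handler. A hedged sketch of that baseline:

        # Collect referenced keys without triggering the lazy per-entity gets,
        # resolve them in two batch calls, and hand the template plain values.
        posts = Post.all().fetch(limit=100)

        author_keys = [Post.author.get_value_for_datastore(p) for p in posts]
        authors = dict(zip(author_keys, db.get(author_keys)))     # one batch get

        city_keys = [Author.city.get_value_for_datastore(a) for a in authors.values()]
        cities = dict(zip(city_keys, db.get(city_keys)))          # one batch get

        rows = []
        for p in posts:
            author = authors[Post.author.get_value_for_datastore(p)]
            city = cities[Author.city.get_value_for_datastore(author)]
            rows.append({'content': p.content,
                         'author_name': author.name,
                         'city_name': city.name})

        template.render(path, {'posts': rows})   # 3 datastore calls, not ~201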

    Read the article
