Search Results

Search found 26912 results on 1077 pages for 'default programs'.

  • Why are floating point values so prolific?

    - by Kibbee
    So, the title says it all: why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not being able to accurately represent even a number such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to use. If you sit back and think about every time you used a floating point value, how many times did you say "well, some error would be OK, as long as the result was a few microseconds faster"? It really makes me think, because Jeff was talking about NP-completeness, and how heuristics give an answer that is only kind of right. And, well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are simply not valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point, decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate?

    [EDIT] Just to clarify: I was talking about base-2 floating point, not base-10 floating point. .NET offers the Decimal data type, which is a base-10 floating point value and offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base-10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
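
    To make the base-2 versus base-10 point concrete, here is a small demonstration in Java; nothing beyond the standard library is assumed:

        import java.math.BigDecimal;

        public class FloatVsDecimal {
            public static void main(String[] args) {
                // 0.1 has no exact binary representation, so the error is
                // visible after a single addition.
                System.out.println(0.1 + 0.2);  // prints 0.30000000000000004

                // BigDecimal stores base-10 digits exactly, at the cost of speed.
                BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
                System.out.println(sum);        // prints 0.3
            }
        }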

  • Thinking Sphinx not working in test mode

    - by J. Pablo Fernández
    I'm trying to get Thinking Sphinx to work in test mode in Rails. Basically this:

        ThinkingSphinx::Test.init
        ThinkingSphinx::Test.start

    freezes and never comes back. My database configuration is the same for test and development:

        dry_setting: &dry_setting
          adapter: mysql
          host: localhost
          encoding: utf8
          username: rails
          password: blahblah
        development:
          <<: *dry_setting
          database: proj_devel
          socket: /tmp/mysql.sock # sphinx requires it
        test:
          <<: *dry_setting
          database: proj_test
          socket: /tmp/mysql.sock # sphinx requires it

    and sphinx.yml:

        development:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin
        test:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin
        production:
          enable_star: 1
          min_infix_len: 2

    The generated config files, config/development.sphinx.conf and config/test.sphinx.conf, only differ in database names, directories and similar things; nothing functional. Generating the index for development goes without an issue:

        $ rake ts:in
        (in /Users/pupeno/proj)
        default config
        Generating Configuration to /Users/pupeno/proj/config/development.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff
        using config file '/Users/pupeno/proj/config/development.sphinx.conf'...
        indexing index 'user_core'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.8% done
        total 7 docs, 422 bytes
        total 0.098 sec, 4320.80 bytes/sec, 71.67 docs/sec
        indexing index 'user_delta'...
        collected 0 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, nan% done
        total 0 docs, 0 bytes
        total 0.010 sec, 0.00 bytes/sec, 0.00 docs/sec
        distributed index 'user' can not be directly indexed; skipping.

    but when I try to do it for test, it freezes:

        $ RAILS_ENV=test rake ts:in
        (in /Users/pupeno/proj)
        DEPRECATION WARNING: require "activeresource" is deprecated and will be removed in Rails 3. Use require "active_resource" instead.. (called from /Users/pupeno/.rvm/gems/ruby-1.8.7-p249/gems/activeresource-2.3.5/lib/activeresource.rb:2)
        default config
        Generating Configuration to /Users/pupeno/proj/config/test.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff
        using config file '/Users/pupeno/proj/config/test.sphinx.conf'...
        indexing index 'user_core'...

    It's been there for more than 10 minutes; the user table has 4 records. The database directories look quite different, but I don't know what to make of them:

        $ ls -l db/sphinx/development/
        total 96
        -rw-r--r--  1 pupeno  staff   196 Mar 11 18:10 user_core.spa
        -rw-r--r--  1 pupeno  staff  4982 Mar 11 18:10 user_core.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_core.sph
        -rw-r--r--  1 pupeno  staff  3067 Mar 11 18:10 user_core.spi
        -rw-r--r--  1 pupeno  staff    84 Mar 11 18:10 user_core.spm
        -rw-r--r--  1 pupeno  staff  6832 Mar 11 18:10 user_core.spp
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spa
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_delta.sph
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spi
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spm
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spp

        $ ls -l db/sphinx/test/
        total 0
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.spl
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp0
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp1
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp2
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp7

    Nothing gets added to any log when this happens. Any ideas where to go from here?

    I can run the indexer manually (/opt/local/bin/indexer --config config/test.sphinx.conf --all), which generates the same output as rake ts:in, so no help there.

  • overloaded stream insertion operator with a vector

    - by Julz
    Hi, I'm trying to write an overloaded stream insertion operator for a class whose only member is a vector. I don't really know what I'm doing (let's make that clear). It's a vector of "Point"s, a struct containing two doubles. I figure what I want is to insert user input (a bunch of doubles) into a stream that I then send to a modifier method? I keep working off other stream insertion examples such as:

        std::ostream& operator<< (std::ostream& o, Fred const& fred)
        {
          return o << fred.i_;
        }

    but when I try something similar:

        istream & operator >> (istream &inStream, Polygon &vertStr)
        {
          inStream >> ws;
          inStream >> vertStr.vertices;
          return inStream;
        }

    I get a "no match for operator" error. If I leave off the .vertices it compiles, but I figure it's not right? (vertices is the name of my vector.) And even if it is right, I don't actually know what syntax to use in my driver to call it. I'm also not 100% sure what my modifier method needs to look like. Here's my Polygon class:

        // header
        #ifndef POLYGON_H
        #define POLYGON_H
        #include "Segment.h"
        #include <vector>

        class Polygon
        {
          friend std::istream & operator >> (std::istream &inStream, Polygon &vertStr);
        public:
          // Constructor
          Polygon(const Point &theVerts);
          // Default Constructor
          Polygon();
          // Copy Constructor
          Polygon(const Polygon &polyCopy);
          // Accessor/Modifier methods
          inline std::vector<Point> getVector() const {return vertices;}
          // Return number of Vector elements
          inline int sizeOfVect() const {return (int) vertices.capacity();}
          // add Point elements to vector
          inline void setVertices(const Point &theVerts){vertices.push_back (theVerts);}
        private:
          std::vector<Point> vertices;
        };
        #endif

        // body
        using namespace std;
        #include "Polygon.h"

        // Constructor
        Polygon::Polygon(const Point &theVerts)
        {
          vertices.push_back (theVerts);
        }

        // Copy Constructor
        Polygon::Polygon(const Polygon &polyCopy)
        {
          vertices = polyCopy.vertices;
        }

        // Default Constructor
        Polygon::Polygon(){}

        istream & operator >> (istream &inStream, Polygon &vertStr)
        {
          inStream >> ws;
          inStream >> vertStr;
          return inStream;
        }

    Any help greatly appreciated; sorry to be so vague. A lecturer has just kind of given us a brief example of stream insertion and then left us on our own. Thanks. Oh, and I realise there are probably many other problems that need fixing.

  • GAE formpreview

    - by Niklas R
    I'm trying to enable form preview with Google App Engine. Getting the following error message I suspect being mistaken somewhere: ... handler = handler_class() TypeError: __call__() takes at least 2 arguments (1 given) Can you tell what's wrong with my attempt? Here is some of the code. from django.contrib.formtools.preview import FormPreview class AFormPreview(FormPreview): def done(self, request, cleaned_data): # Do something with the cleaned_data, then redirect # to a "success" page. self.response.out.write('Done!') class AForm(djangoforms.ModelForm): text = forms.CharField(widget=forms.Textarea(attrs={'rows':'11','cols':'70','class':'foo'}),label=_("content").capitalize()) def clean(self): cleaned_data = self.clean_data name = cleaned_data.get("name") if not name: raise forms.ValidationError("No name.") # Always return the full collection of cleaned data. return cleaned_data class Meta: model = A fields = ['category','currency','price','title','phonenumber','postaladress','name','text','email'] #change the order ... ('/aformpreview/([^/]*)', AFormPreview(AForm)), UPDATE: Here's a complete app where the preview is not working. Any ideas are most welcome: import cgi from google.appengine.api import users from google.appengine.ext import db from google.appengine.ext import webapp from google.appengine.ext.webapp import template from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext.db import djangoforms class Item(db.Model): name = db.StringProperty() quantity = db.IntegerProperty(default=1) target_price = db.FloatProperty() priority = db.StringProperty(default='Medium',choices=[ 'High', 'Medium', 'Low']) entry_time = db.DateTimeProperty(auto_now_add=True) added_by = db.UserProperty() class ItemForm(djangoforms.ModelForm): class Meta: model = Item exclude = ['added_by'] from django.contrib.formtools.preview import FormPreview class ItemFormPreview(FormPreview): def done(self, request, cleaned_data): # Do something with the cleaned_data, then redirect # to a "success" page. 
return HttpResponseRedirect('/') class MainPage(webapp.RequestHandler): def get(self): self.response.out.write('<html><body>' '<form method="POST" ' 'action="/">' '<table>') # This generates our shopping list form and writes it in the response self.response.out.write(ItemForm()) self.response.out.write('</table>' '<input type="submit">' '</form></body></html>') def post(self): data = ItemForm(data=self.request.POST) if data.is_valid(): # Save the data, and redirect to the view page entity = data.save(commit=False) entity.added_by = users.get_current_user() entity.put() self.redirect('/items.html') else: # Reprint the form self.response.out.write('<html><body>' '<form method="POST" ' 'action="/">' '<table>') self.response.out.write(data) self.response.out.write('</table>' '<input type="submit">' '</form></body></html>') class ItemPage(webapp.RequestHandler): def get(self): query = db.GqlQuery("SELECT * FROM Item ORDER BY name") for item in query: self.response.out.write('<a href="/edit?id=%d">Edit</a> - ' % item.key().id()) self.response.out.write("%s - Need to buy %d, cost $%0.2f each<br>" % (item.name, item.quantity, item.target_price)) class EditPage(webapp.RequestHandler): def get(self): id = int(self.request.get('id')) item = Item.get(db.Key.from_path('Item', id)) self.response.out.write('<html><body>' '<form method="POST" ' 'action="/edit">' '<table>') self.response.out.write(ItemForm(instance=item)) self.response.out.write('</table>' '<input type="hidden" name="_id" value="%s">' '<input type="submit">' '</form></body></html>' % id) def post(self): id = int(self.request.get('_id')) item = Item.get(db.Key.from_path('Item', id)) data = ItemForm(data=self.request.POST, instance=item) if data.is_valid(): # Save the data, and redirect to the view page entity = data.save(commit=False) entity.added_by = users.get_current_user() entity.put() self.redirect('/items.html') else: # Reprint the form self.response.out.write('<html><body>' '<form method="POST" ' 'action="/edit">' '<table>') self.response.out.write(data) self.response.out.write('</table>' '<input type="hidden" name="_id" value="%s">' '<input type="submit">' '</form></body></html>' % id) def main(): application = webapp.WSGIApplication( [('/', MainPage), ('/edit', EditPage), ('/items.html', ItemPage), ('/itemformpreview', ItemFormPreview(ItemForm)), ], debug=True) run_wsgi_app(application)

  • Change find method in database search so that it isn't case sensitive in Rails app

    - by Ryan
    Hello, I am learning Rails and have created a work-in-progress app that does one-word searches on a database of shortcut keys for various programs (http://keyboardcuts.heroku.com/shortcuts/home). The search method in the Shortcut model is the following:

        def self.search(search)
          search_condition = "%" + search + "%"
          find(:all, :conditions => ['action LIKE ? OR application LIKE ?', search_condition, search_condition])
        end

    ...where 'action' and 'application' are columns in a SQLite table. (source: https://we.riseup.net/rails/simple-search-tutorial) For some reason, the search seems to be case sensitive (you can see this by searching 'Paste' vs. 'paste'). Can anyone help me figure out why, and what I can do to make it not case sensitive? If not, can you at least point me in the right direction?

    Database creation: I first copied shortcuts from various websites into Excel and saved it as a CSV. Then I migrated the database and filled it with the data using db:seed and a small script I wrote (I viewed the database and it looked fine). To get the SQLite database to the server, I used Taps as outlined by the Heroku website (http://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/). I am using Ubuntu. Please let me know if you need more information. Thanks in advance for your help, very much appreciated! Ryan
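
    One likely explanation: Heroku runs the app against PostgreSQL, whose LIKE is case-sensitive, while SQLite's LIKE is case-insensitive for ASCII, so the behavior only shows up after the Taps push. The portable fix is to lower-case both sides of the comparison. A minimal sketch of the idea, written here in JDBC terms rather than Rails (the table name "shortcuts" is assumed for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class ShortcutSearch {
            // Lower-casing both the pattern and the columns makes LIKE behave
            // the same on SQLite, PostgreSQL and MySQL.
            public static ResultSet search(Connection conn, String term) throws SQLException {
                String pattern = "%" + term.toLowerCase() + "%";
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM shortcuts WHERE LOWER(action) LIKE ? OR LOWER(application) LIKE ?");
                ps.setString(1, pattern);
                ps.setString(2, pattern);
                return ps.executeQuery();
            }
        }

    The same LOWER(...) trick drops straight into the :conditions string of the Rails finder shown above.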

  • Lots of mysql Sleep processes

    - by user259284
    Hello, I am still having trouble with my MySQL server. It seems that since I optimized it the tables have kept growing, and now it is sometimes very slow again. I have no idea how to optimize it further. The MySQL server has 48 GB of RAM and mysqld is using about 8; most of the tables are InnoDB. The site has about 2000 users online. I also run EXPLAIN on every query, and every one of them is indexed. MySQL processes: http://www.pik.ba/mysqlStanje.php

    my.cnf:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 10.100.27.30
        #
        # * Fine Tuning
        #
        key_buffer = 64M
        key_buffer_size = 512M
        max_allowed_packet = 16M
        thread_stack = 128K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 1000
        table_cache = 1000
        join_buffer_size = 2M
        tmp_table_size = 2G
        max_heap_table_size = 2G
        innodb_buffer_pool_size = 3G
        innodb_additional_mem_pool_size = 128M
        innodb_log_file_size = 100M
        log-slow-queries = /var/log/mysql/slow.log
        sort_buffer_size = 5M
        net_buffer_length = 5M
        read_buffer_size = 2M
        read_rnd_buffer_size = 12M
        thread_concurrency = 10
        ft_min_word_len = 3
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 512M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
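
    On the "Sleep" processes in the title specifically: those are usually idle client connections that the web tier keeps open, not queries that are running slowly. A quick way to inspect them, sketched here over JDBC (the connection URL and credentials are placeholders, and MySQL Connector/J is assumed on the classpath); lowering MySQL's wait_timeout variable makes the server close long-idle connections sooner:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class SleepingConnections {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/", "root", "secret"); // placeholders
                Statement st = conn.createStatement();
                // List connections that have been idle ("Sleep") the longest.
                ResultSet rs = st.executeQuery("SHOW FULL PROCESSLIST");
                while (rs.next()) {
                    if ("Sleep".equals(rs.getString("Command"))) {
                        System.out.println(rs.getString("Host") + " idle for "
                                + rs.getInt("Time") + "s");
                    }
                }
            }
        }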

  • People Picker can't find Forms Authentication Users in WSS 3.0

    - by beyti
    I used a lot of tutorials to turn my Windows-authenticated default WSS web app into one that uses Forms Authentication. What I've done so far:

    1. Created a web app and a site in WSS 3.0, and enabled anonymous access for all site content. This WSS app is on the "wss3" server.
    2. Created a membership DB with regsql.exe in the .NET framework folder, with its default settings, i.e. a database named aspnetdb. This DB is on the "sqlserver" server.
    3. Gave db_owner permission on the aspnetdb database to the WSS web app admin. The user is registered under the same domain as the SQL and WSS machines.
    4. Configured the site's web.config file with the following changes/additions.

       Added the connection string:

        <connectionStrings>
          <clear />
          <add name="LocalSqlServer" connectionString="server=sqlserver;database=aspnetdb; Integrated Security=SSPI" providerName="System.Data.SqlClient" />
        </connectionStrings>

       Added the membership provider:

        <membership>
          <providers>
            <add name="AspNetSqlMembershipProvider"
                 type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="LocalSqlServer"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresQuestionAndAnswer="true"
                 applicationName="/"
                 requiresUniqueEmail="false"
                 passwordFormat="Hashed"
                 maxInvalidPasswordAttempts="5"
                 minRequiredPasswordLength="7"
                 minRequiredNonalphanumericCharacters="1"
                 passwordAttemptWindow="10"
                 passwordStrengthRegularExpression="" />
          </providers>
        </membership>

       Also checked the People Picker settings:

        <PeoplePickerWildcards>
          <clear />
          <add key="AspNetSqlMembershipProvider" value="%" />
        </PeoplePickerWildcards>

    5. After all that, I changed the application provider of the site I created to use Forms, giving it the provider name "AspNetSqlMembershipProvider".
    6. Created some users for Forms Authentication via the ASP.NET Configuration page in Visual Studio.
    7. Checked the users in the aspnetdb database. They are there.
    8. Tried to log in to WSS with one of them. Successfully logged in, with no privileges of course.
    9. Tried to give permission via Web Application Policy to the user that logged in.
    10. People Picker couldn't find it at all. None of the Forms users could be found. It also seems clear that the AD connection has changed, because none of the AD users can be found either.

    It seems I'm missing something in the People Picker configuration. Any help would be appreciated. Thanks in advance. Beytan

  • Can you review my Perl rewrite of Cucumber?

    - by Evgeny
    There is a team in our company working on acceptance-testing an X11 GUI application, and they have created a monstrous acceptance-testing framework that drives the GUI as well as running scenarios. The framework is written in Perl 5, and the scenario files look more like very complex Perl programs (thousands of lines long, in a procedural style) than acceptance tests. I recently learned Ruby's Cucumber, and have generally been using Ruby for quite a long time. But unfortunately I can't just bring in Ruby to replace Perl, because the people writing all of this don't know Ruby, and it's quite certain that they won't want "this" kind of interruption. So, to bring Ruby's Cucumber a bit closer to their work, I rewrote it in Perl 5. Unfortunately I am really not a Perl programmer, and would love to get a code review and hear suggestions from people who know both Perl and Cucumber. Hi Perl/Cucumber Stack Overflow users: please help me create this "open source" attempt to re-create Cucumber for Perl! I would love to hear your comments and will accept any acceptable help. The minimal source code is here: http://github.com/kesor/p5-cucumber. Thank you for your attention. For those not familiar with Cucumber, please take just one small moment to look at this one small page: http://wiki.github.com/aslakhellesoy/cucumber

  • Why do I get the error "Only antlib URIs can be located from the URI alone, not the URI" when trying to run hibernate tools in my build.xml

    - by Casbah
    I'm trying to run hibernate tools in an ant build to generate ddl from my JPA annotations. Ant dies on the taskdef tag. I've tried with ant 1.7, 1.6.5, and 1.6 to no avail. I've tried both in eclipse and outside. I've tried including all the hbn jars in the hibernate-tools path and not. Note that I based my build file on this post: http://stackoverflow.com/questions/281890/hibernate-jpa-to-ddl-command-line-tools I'm running eclipse 3.4 with WTP 3.0.1 and MyEclipse 7.1 on Ubuntu 8. Build.xml: <project name="generateddl" default="generate-ddl"> <path id="hibernate-tools"> <pathelement location="../libraries/hibernate-tools/hibernate-tools.jar" /> <pathelement location="../libraries/hibernate-tools/bsh-2.0b1.jar" /> <pathelement location="../libraries/hibernate-tools/freemarker.jar" /> <pathelement location="../libraries/jtds/jtds-1.2.2.jar" /> <pathelement location="../libraries/hibernate-tools/jtidy-r8-20060801.jar" /> </path> <taskdef classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="hibernate-tools"/> <target name="generate-ddl" description="Export schema to DDL file"> <!-- compile model classes before running hibernatetool --> <!-- task definition; project.class.path contains all necessary libs <taskdef name="hibernatetool" classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="project.class.path" /> --> <hibernatetool destdir="sql"> <!-- check that directory exists --> <jpaconfiguration persistenceunit="default" /> <classpath> <dirset dir="WebRoot/WEB-INF/classes"> <include name="**/*"/> </dirset> </classpath> <hbm2ddl outputfilename="schemaexport.sql" format="true" export="false" drop="true" /> </hibernatetool> </target> Error message (ant -v): Apache Ant version 1.7.0 compiled on December 13 2006 Buildfile: /home/joe/workspace/bento/ant-generate-ddl.xml parsing buildfile /home/joe/workspace/bento/ant-generate-ddl.xml with URI = file:/home/joe/workspace/bento/ant-generate-ddl.xml Project base dir set to: /home/joe/workspace/bento [antlib:org.apache.tools.ant] Could not load definitions from resource org/apache/tools/ant/antlib.xml. It could not be found. BUILD FAILED /home/joe/workspace/bento/ant-generate-ddl.xml:12: Only antlib URIs can be located from the URI alone,not the URI at org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:216) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.parseBuildFile(InternalAntRunner.java:191) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:400) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) Total time: 195 milliseconds

  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split an image of around 4000x6000 pixels into 4 parts (amongst many other tasks), and I need this to be as quick as possible for multiple users. My current code for doing this is:

    var bitmaps = new RenderTargetBitmap[elements.Length]; using (var stream = blobService.Stream(key)) { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.StreamSource = stream; bi.EndInit(); for (var i = 0; i < elements.Length; i++) { var element = elements[i]; TransformGroup transformGroup = new TransformGroup(); TranslateTransform translateTransform = new TranslateTransform(); translateTransform.X = -element.Left; translateTransform.Y = -element.Top; transformGroup.Children.Add(translateTransform); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.PushTransform(transformGroup); cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); bitmaps[i] = rtb; } } for (var i = 0; i < bitmaps.Length; i++) { using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmaps[i])); encoder.Save(ms); var regionKey = WebPath.Variant(key, elements[i].Id); saveBlobService.Save("image/png", regionKey, ms); } }

    I am running multiple threads which take jobs off a queue. I am finding that if this part of the code is hit by 4 threads at once, I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj), but this isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split), but I still get the out-of-memory exceptions (this part of the code executes quite quickly). Is this normal, considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available?

    UPDATE: My new code, as per Moozhe's help:

    public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements) { using (var stream = blobService.Stream(key)) { foreach (var element in elements) { stream.Position = 0; BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height); bi.StreamSource = stream; bi.EndInit(); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(rtb)); encoder.Save(ms); var regionKey = WebPath.Variant(key, element.Id); blobService.Save("image/png", regionKey, ms); } } } }
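
    A common middle ground between lock(obj) and unbounded parallelism is to cap concurrency with a counting semaphore, so at most N image jobs hold their full-size bitmaps in memory at once. The pattern is sketched here in Java since the idea is identical in .NET (System.Threading.Semaphore); the worker method is a hypothetical placeholder:

        import java.util.concurrent.Semaphore;

        public class BoundedImageSplitter {
            // Allow at most two splits in flight; each one holds a full-size
            // bitmap plus its tiles, so this bound directly caps peak memory.
            private static final Semaphore permits = new Semaphore(2);

            public static void splitImage(String key) throws InterruptedException {
                permits.acquire();
                try {
                    doSplit(key); // load source, render the four tiles, save
                } finally {
                    permits.release();
                }
            }

            private static void doSplit(String key) {
                // hypothetical worker; the real bitmap work goes here
            }
        }

    Unlike a single lock, this still allows some overlap between jobs, and the permit count can be tuned to the machine's memory.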

  • Extending Python and Objective-C

    - by chpwn
    I'm a fan of clean code. I like my languages to be able to express what I'm trying to do, but I like the syntax to mirror that too. For example, I work on a lot of programs in Objective-C for jailbroken iPhones, which patch other code using the method_setImplementation() function of the runtime. Or, in PyObjC, I have to use the syntax UIView.initWithFrame_(), which is also pretty awful and unreadable with the way the method names are structured. In both cases, the language does not support this in syntax. I've found three basic ways that this is done: Insane macros. Take a look at this "CaptainHook", it does what I'm looking for in a usable way, but it isn't quite clean and is a major hack. There's also "Logos", which implements a very nice syntax, but is written in Perl parsing my code with a ton of regular expressions. This scares me. I like the idea of adding a %hook ClassName, but not by using regular expressions to parse C or Objective-C. Finally, there is Cycript. This is an extension to JavaScript which interfaces with the Objective-C runtime and allows you to use Objective-C style code in your JavaScript, and inject that into other processes. This is likely the cleanest as it actually uses a parser for the JavaScript, but I'm not a huge fan of that language in general. Should, and how should, I create an extension to Python and Objective-C to allow me to do this? Is it worth writing a parser for my language to transform the syntax into something nicer, if it is only in a very specialized niche like this? Should I just live with the horrible syntax of the default Objective-C hooking or PyObjC?

  • code to convert from avi to asf

    - by George2
    Hello everyone. No matter what library/SDK is used, I want to convert from AVI to ASF very quickly (I could even sacrifice some quality of video and audio). I am working on the Windows platform (Vista and 2008 Server); a .NET SDK/code is preferred, but C++ code is also fine. :-) I learned from the link below that there could be a very quick way to convert from AVI to ASF to support streaming better, as mentioned: "could convert the video from AVI to ASF format using a simple copy (i.e. the content is the same, but container changes)". My question is: after some hours of study and trying various SDKs/tools, as a newbie I do not know how to begin, so I am asking for reference sample code for this task. :-) (As this is a different issue, we decided to start a new topic. :-)) http://stackoverflow.com/questions/743220/streaming-avi-file-issue Thanks in advance, George

    EDIT 1: I got the ffmpeg binary from http://ffmpeg.arrozcru.org/autobuilds/ffmpeg-latest-mingw32-static.tar.bz2 and ran the following command:

        C:\software\ffmpeg-latest-mingw32-static\bin>ffmpeg.exe -i test.avi -acodec copy -vcodec copy test.asf
        FFmpeg version SVN-r18506, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaac --enable-pthreads --enable-libvorbis --enable-libmp3lame --enable-libopenjpeg --enable-libtheora --enable-libspeex --enable-libxvid --enable-libfaad --enable-libschroedinger --enable-libx264
          libavutil 50. 3. 0 / 50. 3. 0
          libavcodec 52.25. 0 / 52.25. 0
          libavformat 52.32. 0 / 52.32. 0
          libavdevice 52. 2. 0 / 52. 2. 0
          libswscale 0. 7. 1 / 0. 7. 1
          built on Apr 14 2009 04:04:47, gcc: 4.2.4
        Input #0, avi, from 'test.avi':
          Duration: 00:00:44.86, start: 0.000000, bitrate: 5291 kb/s
          Stream #0.0: Video: msvideo1, rgb555le, 1280x1024, 5 tbr, 5 tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Output #0, asf, to 'test.asf':
          Stream #0.0: Video: CRAM / 0x4D415243, rgb555le, 1280x1024, q=2-31, 1k tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 224 fps=222 q=-1.0 Lsize= 29426kB time=44.80 bitrate=5380.7kbits/s
        video:26910kB audio:1932kB global headers:0kB muxing overhead 2.023317%

    The conversion succeeds, but Windows Media Player then fails to play the resulting file with the following error; does anyone have any ideas? http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6

  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes, and I need to overwrite some. I launch an instruction that does an archetype:generate in a directory where the archetype output already exists. Is there a parameter that lets me overwrite existing archetypes? I have searched the Maven definitive guide, but it states that the only parameters accepted are: -DgroupId -DartifactId -Dversion -DpackageName -DarchetypeGroupId -DarchetypeArtifactId -DarchetypeVersion -DinteractiveMode. I could just search the directory and delete the files, but this process is going to be done automatically (so no human involved, no brains involved) and I wouldn't like the machine deleting things around. Thanks for all!

    Edit: I almost forgot, here is some Maven trace: [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Batch mode [INFO] Archetype defined by properties [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0 [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: foo.tecnologia [INFO] Parameter: packageName, Value: foo.tecnologia [INFO] Parameter: basedir, Value: C:\temp\Desarrollo [INFO] Parameter: package, Value: foo.tecnologia [INFO] Parameter: version, Value: 1.0 [INFO] Parameter: artifactId, Value: Foo-Lib-Test [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286) at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69) at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at
org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at com.foo.model.CSMavenCli.main(CSMavenCli.java:391) at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399) at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167) at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93) at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58) at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67) at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Unknown Source) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory Directory Foo-Lib-Test already exists - please run from a clean directory [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010 [INFO] Final Memory: 15M/28M [INFO] 
------------------------------------------------------------------------

  • Java parsing XML document gives "Content not allowed in prolog." error

    - by thechiman
    I am writing a program in Java that takes a custom XML file and parses it. I'm using the XML file for storage. I am getting the following error in Eclipse:

        [Fatal Error] :1:1: Content is not allowed in prolog.
        org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:239)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283)
            at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208)
            at me.ericso.psusoc.RequirementSatisfier.parseXML(RequirementSatisfier.java:61)
            at me.ericso.psusoc.RequirementSatisfier.getCourses(RequirementSatisfier.java:35)
            at me.ericso.psusoc.programs.RequirementSatisfierProgram.main(RequirementSatisfierProgram.java:23)

    The beginning of the XML file is included:

        <?xml version="1.0" ?>
        <PSU>
          <Major id="IST">
            <name>Information Science and Technology</name>
            <degree>B.S.</degree>
            <option> Information Systems: Design and Development Option</option>
            <requirements>
              <firstlevel type="General_Education" credits="45">
                <component type="Writing_Speaking">GWS</component>
                <component type="Quantification">GQ</component>

    The program is able to read in the XML file, but when I call DocumentBuilder.parse(XMLFile) to get a parsed org.w3c.dom.Document, I get the error above. It doesn't seem to me that I have invalid content in the prolog of my XML file. I can't figure out what is wrong. Please help. Thanks.
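
    "Content is not allowed in prolog" at line 1, column 1 very often means the parser saw bytes before <?xml, most commonly a UTF-8 byte-order mark written by an editor. A hedged sketch of one way to check for and strip a BOM before handing the stream to DocumentBuilder, using only standard JDK classes:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.PushbackInputStream;

        public class BomSafeParse {
            // Returns a stream positioned after a UTF-8 BOM (EF BB BF), if present.
            public static InputStream skipUtf8Bom(InputStream in) throws IOException {
                PushbackInputStream pin = new PushbackInputStream(in, 3);
                byte[] head = new byte[3];
                int n = pin.read(head, 0, 3);
                boolean hasBom = n == 3
                        && (head[0] & 0xFF) == 0xEF
                        && (head[1] & 0xFF) == 0xBB
                        && (head[2] & 0xFF) == 0xBF;
                if (!hasBom && n > 0) {
                    pin.unread(head, 0, n); // no BOM: give the bytes back
                }
                return pin;
            }
        }

    Usage would then be builder.parse(BomSafeParse.skipUtf8Bom(new FileInputStream("courses.xml"))), where "courses.xml" stands in for the real file. If the leading bytes are not a BOM (e.g. blank lines or stray text before the prolog), the same peek-and-inspect approach will show what they are.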

  • WCF and Firewall

    - by Jim Biddison
    I have written a very simple WCF service (hosted in IIS) and a web application that talks to it. If they are both in the same domain, it works fine. But when I put them in different domains (on different sides of a firewall), the web application says:

    The request for security token could not be satisfied because authentication failed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ServiceModel.FaultException: The request for security token could not be satisfied because authentication failed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    The relevant part of the service web.config is: <system.serviceModel> <services> <service behaviorConfiguration="MigrationHelperBehavior" name="MigrationHelper"> <endpoint address="" binding="wsHttpBinding" contract="IMigrationHelper"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <endpoint binding="httpBinding" contract="IMigrationHelper" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MigrationHelperBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>

    The web application (client) web.config says: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMigrationHelper" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://mydomain.com/MigrationHelper.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMigrationHelper" contract="MyNewServiceReference.IMigrationHelper" name="WSHttpBinding_IMigrationHelper"> <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel>

    I believe both of these are just the defaults that VS 2008 created for me. So my question is: how does one go about configuring the service and client when they are not in the same domain? Thanks, Jim Biddison

  • Using Java Executor on AppEngine causes AccessControlException

    - by Drew
    How do you get java.util.concurrent.Executor or CompletionService to work on Google AppEngine? The classes are all officially white-listed, but I get a runtime security error when trying to submit asynchronous tasks. Code: // uses the async API but this factory makes it so that tasks really // happen sequentially Executor executor = java.util.concurrent.Executors.newSingleThreadExecutor(); // wrap Executor in CompletionService CompletionService<String> completionService = new ExecutorCompletionService<String>(executor); final SomeTask someTask = new SomeTask(); // this line throws exception completionService.submit(new Callable<String>(){ public String call() { return someTask.doNothing("blah"); } }); // alternately, send Runnable task directly to Executor, // which also throws an exception executor.execute(new Runnable(){ public void run() { someTask.doNothing("blah"); } }); } private class SomeTask{ public String doNothing(String message){ return message; } } Exception: java.security.AccessControlException: access denied (java.lang.RuntimePermission modifyThreadGroup) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) at java.security.AccessController.checkPermission(AccessController.java:546) at java.lang.SecurityManager.checkPermission(SecurityManager.java:532) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkAccess(DevAppServerFactory.java:191) at java.lang.ThreadGroup.checkAccess(ThreadGroup.java:288) at java.lang.Thread.init(Thread.java:332) at java.lang.Thread.(Thread.java:565) at java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:542) at java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:672) at java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:697) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:652) at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:590) at java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:152) This code works fine when run on Tomcat or via command-line JVM. However, it chokes in the AppEngine SDK Jetty container (tried with Eclipse plugin and the maven-gae-plugin). AppEngine is likely designed to not allow potentially dangerous programs to run, so I could see them completely disabling thread creation. However, why would Google allow you to create a class, but not allow you to call methods on it? White-listing java.util.concurrent is misleading. Is there some other way to do parallel/simultaneous/concurrent tasks on GAE?
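
    Since the sandbox check fires on thread creation (the ThreadGroup access happens in Thread's constructor), one workaround that keeps the CompletionService API is to hand it an Executor that runs each task synchronously on the calling request thread, so no thread is ever spawned. A minimal sketch:

        import java.util.concurrent.Callable;
        import java.util.concurrent.CompletionService;
        import java.util.concurrent.Executor;
        import java.util.concurrent.ExecutorCompletionService;

        public class SameThreadExample {
            public static void main(String[] args) throws Exception {
                // Runs each task inline on the caller's thread: no Thread is
                // constructed, so the AccessControlException never triggers.
                Executor sameThread = new Executor() {
                    public void execute(Runnable command) {
                        command.run();
                    }
                };
                CompletionService<String> cs =
                        new ExecutorCompletionService<String>(sameThread);
                cs.submit(new Callable<String>() {
                    public String call() {
                        return "blah";
                    }
                });
                System.out.println(cs.take().get()); // prints "blah"
            }
        }

    The trade-off is that the tasks are no longer concurrent; at the time of writing, the sanctioned route for real background work on App Engine was the Task Queue API rather than application-managed threads.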

  • What's all this fuss about?

    - by atch
    Hi guys, at the beginning I want to state that it is not my intention to upset anyone who uses or likes languages other than C++. I'm saying that because on one forum, every time I tried to ask questions of a similar nature, I was almost always accused of trying to start a row. OK, with that said, here is my question: I don't understand why the creators of Java/C# thought, and still think, that having something like a VM, with source code compiled to bytecode instead of native code, is any advantage in the long run. And why is having functions compiled the first time they are invoked any advantage? And what's the story about "write once, run everywhere"? When I think about the business of having something written once so that it can run everywhere: well, in theory this is all very nice. But I know for a fact that in practice it doesn't look that nice at all. It is rather "write once, test everywhere". And why would I prefer something to be compiled at runtime instead of compile time? If I had to wait even one hour longer for a program to be installed once and for all, with all the compilation done and nothing compiled after that, I would prefer that. And I don't really know how it works in the real world (I'm a student, never worked in the IT business), but, for example, if I have a working program written in C++ for Windows and I wish to move it to another platform, wouldn't I just have to take my source code and compile it on the desired machine? So, in other words, isn't that rather a problem of having a compiler that will compile the source code on different machines (as far as I'm concerned, there is just one C++, and the source code will look identical on every machine)? And last but not least, if you think about it, how many programs are there which are really worth porting? I personally can think of three, maybe four.

  • What is the best Binary Decision Diagram library for Java?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure for representing boolean functions. I'd like to use this data structure in a Java program. My search for Java-based BDD libraries turned up the following packages:

    - JavaBDD
    - JDD
    - JBDD
    - bddbddb

    If you know of any other BDD libraries available for Java programs, please let me know so that I can add them to the list above. If you have used any of these libraries, please tell me about your experience with the library. In particular, I'd like you to compare the available libraries along the following dimensions:

    - Quality. Is the library mature and reasonably bug free?
    - Performance. How do you evaluate the performance of the library?
    - Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented?
    - Ease of use. Was the API well designed? Could you install and use the library quickly and easily?

    Please mention the version of the library that you are evaluating.
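
    For a feel of what these APIs look like, here is a short sketch against JavaBDD (net.sf.javabdd). The method names follow my reading of its BDDFactory/BDD API (init, setVarNum, ithVar, and/or/not, satCount); treat them as assumptions and check them against the version you evaluate:

        import net.sf.javabdd.BDD;
        import net.sf.javabdd.BDDFactory;

        public class BddSketch {
            public static void main(String[] args) {
                // Node-table and cache sizes are tuning parameters, not magic.
                BDDFactory f = BDDFactory.init(100000, 10000);
                f.setVarNum(3);
                BDD a = f.ithVar(0);
                BDD b = f.ithVar(1);
                BDD c = f.ithVar(2);
                BDD formula = a.and(b).or(c.not()); // (a AND b) OR (NOT c)
                System.out.println("satisfying assignments: " + formula.satCount());
                f.done();
            }
        }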

  • How to replace openSSL calls with C# code?

    - by fonix232
    Hey there again! Today I ran into a problem while making a new theme creator for Chrome. As you may know, Chrome uses a "new" file format, called CRX, to manage its plugins and themes. It is basically a zip file, but slightly modified:

        "Cr24" + derkey + signature + zipFile

    And here comes the problem. There are only two CRX creators, written in Ruby and Python. I don't know either language very well (I had some basic experience with Python, though mostly with PyS60), so I would like to ask you to help me convert this Python app to C# code that doesn't depend on external programs. Here is the source of crxmake.py:

        #!/usr/bin/python
        # Cribbed from http://github.com/Constellation/crxmake/blob/master/lib/crxmake.rb
        # and http://src.chromium.org/viewvc/chrome/trunk/src/chrome/tools/extensions/chromium_extension.py?revision=14872&content-type=text/plain&pathrev=14872
        # from: http://grack.com/blog/2009/11/09/packing-chrome-extensions-in-python/
        import sys
        from array import *
        from subprocess import *
        import os
        import tempfile

        def main(argv):
            arg0, dir, key, output = argv
            # zip up the directory
            input = dir + ".zip"
            if not os.path.exists(input):
                os.system("cd %(dir)s; zip -r ../%(input)s . -x '.svn/*'" % locals())
            else:
                print "'%s' already exists using it" % input
            # Sign the zip file with the private key in PEM format
            signature = Popen(["openssl", "sha1", "-sign", key, input], stdout=PIPE).stdout.read()
            # Convert the PEM key to DER (and extract the public form) for inclusion in the CRX header
            derkey = Popen(["openssl", "rsa", "-pubout", "-inform", "PEM", "-outform", "DER", "-in", key], stdout=PIPE).stdout.read()
            out = open(output, "wb")
            out.write("Cr24")  # Extension file magic number
            header = array("l")
            header.append(2)  # Version 2
            header.append(len(derkey))
            header.append(len(signature))
            header.tofile(out)
            out.write(derkey)
            out.write(signature)
            out.write(open(input).read())
            os.unlink(input)
            print "Done."

        if __name__ == '__main__':
            main(sys.argv)

    Please, could you help me?
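
    Whatever language the port ends up in, the binary layout is the part worth pinning down: a 4-byte magic, then three little-endian 32-bit integers (version, key length, signature length), then the DER public key, the signature, and the zip. A hedged sketch of just the header writing, shown in Java here (C#'s BinaryWriter is little-endian by default, so the same shape translates directly); producing derKey and signature (an RSA/SHA-1 signature over the zip) is assumed to happen elsewhere, e.g. with java.security.Signature instead of the openssl subprocesses:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class CrxWriter {
            // Mirrors the Python script: "Cr24", version 2, key length and
            // signature length (all little-endian), then the three payloads.
            public static void writeCrx(OutputStream out, byte[] derKey,
                                        byte[] signature, byte[] zip) throws IOException {
                ByteBuffer header = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
                header.put(new byte[] {'C', 'r', '2', '4'}); // magic number
                header.putInt(2);                  // CRX format version
                header.putInt(derKey.length);
                header.putInt(signature.length);
                out.write(header.array());
                out.write(derKey);
                out.write(signature);
                out.write(zip);
            }
        }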

  • configure batch to send minute info instead of entire stdout

    - by Daniel
    Hi all, I am working on a RedHat server along with several other users. We use the batch utility to set up a job queue. Some of the programs that I use write stuff to stdout during the run, with info on how much data has been processed so far, estimated time until completion, etc.

        batch -q z
        at> myScript -i somefile -o someotherfile

    By default, the batch utility sends an email to me (since I configured it using .forward) with the entire output from stdout. Since the script writes something to stdout a few times each second, the amount of log output I get from a two-day script can be ~20 MB. Clearly not what I want. I can of course pipe stdout to a file like so:

        batch -q z
        at> myScript -i somefile -o someotherfile > myscript.stdout.log

    but then I get a blank e-mail from the utility. So, to my question: is it possible to configure batch so that it sends the time the job started and ended, the run time, or some other valuable information to me, instead of a 20 MB mail or a blank mail? Note that the scripts I use are binaries and I cannot customize the code to output less info in the first place (which would be the optimal solution, I guess). Thanks /Daniel

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Does anyone have a recommendation for a good compression algorithm that works well with double-precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip, etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin, with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K.

    Clarifications:
    - All values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented.
    - A lossless algorithm is desired, since this is scientific data.
    - Conversion to a fixed-point representation with sufficient precision (~5 decimals) might be acceptable, provided there is a significant improvement in storage efficiency.

    Update: Found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
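
    The linked paper (Burtscher's work on floating-point compression) builds on exactly the property this data has: sorted, close-together doubles share most of their high-order bits. A minimal Java sketch of that preconditioning step, which XORs each IEEE-754 bit pattern with its predecessor so the words become mostly zeros and compress far better under a general-purpose backend such as java.util.zip.Deflater. This illustrates the idea; it is not the paper's exact predictor:

        public class XorPrecondition {
            // Transform sorted doubles into XOR deltas of their bit patterns.
            // The transform is exactly invertible via a running XOR, so it is lossless.
            public static long[] encode(double[] values) {
                long[] out = new long[values.length];
                long prev = 0L;
                for (int i = 0; i < values.length; i++) {
                    long bits = Double.doubleToLongBits(values[i]);
                    out[i] = bits ^ prev; // mostly zero when neighbours are close
                    prev = bits;
                }
                return out;
            }

            public static double[] decode(long[] deltas) {
                double[] out = new double[deltas.length];
                long prev = 0L;
                for (int i = 0; i < deltas.length; i++) {
                    prev ^= deltas[i];
                    out[i] = Double.longBitsToDouble(prev);
                }
                return out;
            }
        }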

    Read the article

  • Maven doesn't see my <repository> in <distributionManagement>

    - by Ondra Žižka
    To make Maven "deploy" to a directory, I use this:

    <distributionManagement>
      <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
      <repository>
        <id>local-hack-repo</id>
        <name>LocalDir</name>
        <url>file://${project.basedir}/dist-maven</url>
      </repository>
      <snapshotRepository>
        <id>jboss-snapshots-repository</id>
        <name>JBoss Snapshots Repository</name>
        <!-- <url>https://repository.jboss.org/nexus/content/repositories/snapshots</url> -->
        <url>file://${project.basedir}/dist-maven</url>
      </snapshotRepository>
    </distributionManagement>

    This appears in the effective pom:

    ...
    <distributionManagement>
      <repository>
        <id>local-hack-repo</id>
        <name>LocalDir</name>
        <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
      </repository>
      <snapshotRepository>
        <id>jboss-snapshots-repository</id>
        <name>JBoss Snapshots Repository</name>
        <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
      </snapshotRepository>
      <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
    </distributionManagement>

    But still, Maven insists that it's not there:

    [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]
    [INFO] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
    [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
    [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    [INFO]     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
    [INFO]     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
    [INFO]     at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
    [INFO]     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
    [INFO]     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
    [INFO]     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
    [INFO]     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
    [INFO]     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
    [INFO]     at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
    [INFO]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [INFO]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [INFO]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [INFO]     at java.lang.reflect.Method.invoke(Method.java:601)
    [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
    [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
    [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
    [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    [INFO] Caused by: org.apache.maven.plugin.MojoExecutionException: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
    [INFO]     at org.apache.maven.plugin.deploy.DeployMojo.getDeploymentRepository(DeployMojo.java:235)
    [INFO]     at org.apache.maven.plugin.deploy.DeployMojo.execute(DeployMojo.java:118)
    [INFO]     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
    [INFO]     ... 19 more

    I am using it through the maven-release-plugin. What's wrong?
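    One thing worth testing, as a guess rather than a confirmed diagnosis: the maven-release-plugin forks a second Maven build for the release, and if that forked build reads a POM without the <distributionManagement> block (a profile that isn't activated, or a tagged POM that differs from the working copy), the deploy step won't see it even though your effective POM looks right. You can bypass the POM lookup entirely with the parameter the error message itself suggests, using the URL from your effective POM:

    mvn deploy -DaltDeploymentRepository=local-hack-repo::default::file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven

    If that deploys cleanly, the repository definition itself is fine and the problem is which POM the forked release build is reading.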

    Read the article

  • How to make sure Solr/Lucene won't die with java.lang.OutOfMemoryError?

    - by taw
    I'm really puzzled why it keeps dying with java.lang.OutOfMemoryError during indexing even though it has a few GBs of memory. Is there a fundamental reason why it needs manual tweaking of config files / JVM parameters, instead of just figuring out how much memory is available and limiting itself to that? No other program except Solr ever has this kind of problem. Yes, I can keep tweaking the JVM heap size every time such a crash happens, but this is all so backwards. Here's the stack trace of the latest such crash, in case it is relevant:

    SEVERE: java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:3209)
        at java.lang.String.<init>(String.java:216)
        at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)
        at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:169)
        at org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:701)
        at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:208)
        at org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:676)
        at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667)
        at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:94)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:245)
        at org.apache.lucene.search.Searcher.search(Searcher.java:171)
        at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988)
        at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
        at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
        at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)
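    For what it's worth, the JVM never sizes itself to "all available memory" by design: the maximum heap is a hard cap fixed at startup (by default, typically a fraction of physical RAM), so some explicit tuning is unavoidable. Under Tomcat the usual place for it is CATALINA_OPTS — the sizes below are placeholders, not recommendations:

    export CATALINA_OPTS="-Xms512m -Xmx2g -XX:+HeapDumpOnOutOfMemoryError"

    The -XX:+HeapDumpOnOutOfMemoryError flag at least leaves a heap dump behind, so you can confirm whether the memory is going to Lucene's FieldCache — which is what the getStringIndex/setNextReader frames in the trace suggest, since sorting on a string field populates an in-memory cache that grows with the index.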

    Read the article

  • cleaning up noise in an edge detection algorithm

    - by Faken
    I recently wrote an extremely basic edge detection algorithm that works on an array of chars. The program was meant to detect the edges of blobs of a single particular value in the array, and it worked by simply looking left, right, up and down from each array element and checking whether one of those values differed from the value it was currently looking at. The goal was not to produce a mathematical line but rather a set of ordered points representing a discretized closed-loop edge. The algorithm works perfectly fine, except that my data contains a bit of noise and hence randomly produces edges where there should be none. This in turn wreaked havoc on some of my other programs down the line. There are two types of noise in the data. The first type is fairly sparse and somewhat random. The second type is a semi-continuous straight line on the x=y axis. I know the source of the first type of noise: it's a feature of the data and there is nothing I can do about it. As for the second type, I know it's my program's fault for causing it... though I haven't a clue exactly what is causing it. My question is: How should I go about removing the noise completely? I know that the correct data has points that are always beside each other, is very compact and ordered (with no gaps), and forms a closed loop or multiple loops. The first type of noise, being sparse and random, could easily be taken care of by checking whether any point next to a suspected noise point is also counted as an edge; if not, then the point is almost certainly noise and should be removed. However, the second type of noise, the semi-continuous line about x=y, poses more of a problem. The line is sometimes continuous for random lengths (the longest ran unbroken halfway across my entire array). It can even intersect the actual edge. Any ideas on how to do this?
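    One way to attack both noise types at once is to exploit the closed-loop property the good data is known to have: iteratively delete points that have fewer than two edge neighbours. An open segment — an isolated speck or the x=y streak — erodes away from its endpoints until nothing is left, while a closed loop has no endpoints and survives intact. A sketch over a set of (x, y) points, assuming 8-connectivity; a stub of a point or two may survive where the streak touches the real edge:

    def prune_open_segments(edge_points):
        """Iteratively remove points with fewer than two edge neighbours.
        Open segments (isolated specks and the x=y streak) erode away
        from their endpoints; closed loops have no endpoints and survive."""
        points = set(edge_points)
        changed = True
        while changed:
            changed = False
            for p in list(points):
                x, y = p
                # count edge points among the 8 surrounding cells
                n = sum((x + dx, y + dy) in points
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
                if n < 2:
                    points.discard(p)
                    changed = True
        return points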

    Read the article

  • SUA + Visual Studio + pthreads

    - by vasek7
    Hi, I cannot compile this code under SUA:

    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    void *thread_function(void *arg)
    {
        printf("thread_function started. Arg was %s\n", (char *)arg);
        // pause for 3 seconds
        sleep(3);
        // exit and return a message to another thread
        // that may be waiting for us to finish
        pthread_exit("thread one all done");
    }

    int main()
    {
        pthread_t a_thread;
        void *thread_result;
        // create a thread that starts to run 'thread_function'
        pthread_create(&a_thread, NULL, thread_function, (void *)"thread one");
        printf("Waiting for thread to finish...\n");
        // now wait for the new thread to finish
        // and get any returned message in 'thread_result'
        pthread_join(a_thread, &thread_result);
        printf("Thread joined, it returned %s\n", (char *)thread_result);
        exit(0);
    }

    I'm running on Windows 7 Ultimate x64 with Visual Studio 2008 and 2010, and I have installed Windows Subsystem for UNIX and the Utilities and SDK for Subsystem for UNIX-based Applications in Microsoft Windows 7 and Windows Server 2008 R2. The Include directories property of the Visual Studio project is set to "C:\Windows\SUA\usr\include". What do I have to configure in order to compile and run (and possibly debug) pthreads programs in Visual Studio 2010 (or 2008)?
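    A caveat before fighting with project settings, offered as an assumption rather than a verified fact: the Visual Studio compiler targets the Win32 subsystem and, to my knowledge, cannot produce binaries for the SUA/POSIX subsystem at all. SUA programs are normally built with the gcc shipped in the SUA Utilities and SDK, from inside a SUA shell, along the lines of:

    gcc -o thread_demo thread_demo.c -lpthread

    Visual Studio can still serve as the editor with C:\Windows\SUA\usr\include on its include path, but the actual build and debug steps would then go through the SUA toolchain rather than the MSVC one.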

    Read the article

< Previous Page | 698 699 700 701 702 703 704 705 706 707 708 709  | Next Page >