Search Results

Search found 10355 results on 415 pages for 'shell extension'.

  • iOS: Assignment to iVar in Block (ARC)

    - by manmal
    I have a readonly property isFinished in my interface file:

        typedef void (^MyFinishedBlock)(BOOL success, NSError *e);

        @interface TMSyncBase : NSObject {
            BOOL isFinished_;
        }
        @property (nonatomic, readonly) BOOL isFinished;

    and I want to set it to YES in a block at some point later, without creating a retain cycle to self:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {
            __weak MyClass *weakSelf = self;
            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                [weakSelf willChangeValueForKey:@"isFinished"];
                weakSelf->isFinished_ = YES;
                [weakSelf didChangeValueForKey:@"isFinished"];
                theFinishedBlock(success, e);
            };
            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class extension property
        }

    I'm unsure that this is the right way to do it (I hope I'm not embarrassing myself here ^^). Will this code leak, or break, or is it fine? Perhaps there is an easier way I have overlooked?

    SOLUTION: Thanks to the answers below (especially Krzysztof Zablocki), I was shown the way to go here: define isFinished as a readwrite property in the class extension (somehow I missed that one) so no direct ivar assignment is needed, and change the code to:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {
            __weak MyClass *weakSelf = self;
            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                MyClass *strongSelf = weakSelf;
                strongSelf.isFinished = YES;
                theFinishedBlock(success, e);
            };
            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class extension property
        }

  • MVC View Model IntelliSense / Compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that

        'object' does not contain a definition for 'ID' and no extension method 'ID'
        accepting a first argument of type 'object' could be found (are you missing a
        using directive or an assembly reference?)

    implying that the view is not seeing the model. In the controller I have full access to the model, and I have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

        return View(new TeraViral_Blog());

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Index2
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Index2</h2>
            <fieldset>
                <legend>Fields</legend>
                <p>ID: <%= Html.Encode(Model.ID) %></p>
            </fieldset>
        </asp:Content>

  • How to specialize a type parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article (http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/) I just wrote, I stated my belief that in Scala you can not specify a function that takes an argument that is itself a function with an unbound type parameter. What I mean is you can write:

        def g(f: Array[Double] => Double, Array[Double]): Double

    but you can not write something like:

        def g(f[Y]: Array[Y] => Double, Array[Double]): Double

    because Y is not known. The intended use is that inside g() I will specialize f() to multiple different types at different times. You can write:

        def g[Y](f: Array[Y] => Double, Array[Double]): Double

    but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait extension instead of insisting on passing around a function. What I advocated in my article was:

    1) Creating a trait that imitates the structure of Scala's Function1 trait, something like:

        abstract trait VectorFN {
            def apply[Y](x: Array[Y]): Y
        }

    2) Declaring def g(f: VectorFN, Double): Double, using the trait as the type.

    This works (people here on StackOverflow helped me find it, and I am happy with it) - but am I mis-representing Scala by missing an even better solution?

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model:

        class Foo(models.Model):
            # ...
            ftw = models.CharField(blank=True)
            bar = models.ForeignKey(Bar)

    Right after flushing the database, I use the shell:

        Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
        [GCC 4.4.5] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from apps.foo.models import Foo
        >>> Foo.objects.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator
            for row in compiler.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        DatabaseError: column foo_foo.bar_id does not exist
        LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo...

    What am I doing wrong here?
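
    One detail worth checking: manage.py flush only resets data in tables that already exist, and syncdb never alters an existing table, so a ForeignKey added to the model after the table was first created leaves the column missing no matter how often you flush. A hedged sketch of the development-only fix (the app label 'foo' is assumed; this drops the app's tables and their data):

        # syncdb won't add columns to existing tables; drop and re-create them
        # so the current schema is emitted:
        python manage.py sqlclear foo | python manage.py dbshell
        python manage.py syncdb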

  • HTTP Handler error when downloading files - SSL

    - by Chiefy
    Ok, big problem, as this is affecting two projects on our new server. We have a file that is downloaded by users; the files are downloaded using an HttpHandler. Since moving the site to the server and setting up SSL, the downloads have stopped working, and we get the error message "Unable to download DownloadDocument.ashx from site". DownloadDocument.ashx is the handler page that is set in the web.config, and the button that goes there is a hyperlink with the id of the document as a querystring. I've read the article at http://support.microsoft.com/kb/316431 and read a few other requests on this site, but nothing seems to be working. This problem only happens in IE, and it works fine when I run it on the server over http instead of https.

        public override void HandleRequest(HttpContext context)
        {
            Guid guid = new Guid(context.Request.QueryString["ID"]);
            DataTable dt = Documents.GetDocument(guid);
            if (dt != null)
            {
                context.Response.Cache.SetCacheability(HttpCacheability.Private);
                context.Response.AddHeader("content-disposition", string.Format("attachment; filename={0}", dt.Rows[0]["DocumentName"].ToString()));
                context.Response.AddHeader("Content-Transfer-Encoding", "binary");
                context.Response.AddHeader("Content-Length", ((byte[])dt.Rows[0]["Document"]).Length.ToString());
                context.Response.ContentType = string.Format("application/{0}", dt.Rows[0]["Extension"].ToString().Remove(0, 1));
                context.Response.Buffer = true;
                context.Response.BinaryWrite((byte[])dt.Rows[0]["Document"]);
                context.Response.Flush();
                context.Response.End();
            }
        }

    The above is my current code for the request. I've used the base handler from http://haacked.com/archive/2005/03/17/AnAbstractBoilerplateHttpHandler.aspx. Any ideas on what this might be and how we can fix it? Thanks in advance for all responses.

  • Xcode 4.4 bundle version updates not picked up until subsequent build

    - by Mark Struzinski
    I'm probably missing something simple here. I am trying to auto-increment my build number in Xcode 4.4 only when archiving my application (in preparation for a TestFlight deployment). I have a working shell script that runs on the target and successfully updates the Info.plist file for each build. My build configuration for archiving is named 'Ad-Hoc'. Here is the script:

        if [ $CONFIGURATION == Ad-Hoc ]; then
            echo "Ad-Hoc build. Bumping build#..."
            plist=${PROJECT_DIR}/${INFOPLIST_FILE}
            buildnum=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${plist}")
            if [[ "${buildnum}" == "" ]]; then
                echo "No build number in $plist"
                exit 2
            fi
            buildnum=$(expr $buildnum + 1)
            /usr/libexec/PlistBuddy -c "Set CFBundleVersion $buildnum" "${plist}"
            echo "Bumped build number to $buildnum"
        else
            echo $CONFIGURATION " build - Not bumping build number."
        fi

    This script updates the plist file appropriately, and the change is reflected in Xcode each time I archive. The problem is that the .ipa file that comes out of the archive process still shows the previous build number. I have tried the following solutions with no success:

        - Clean before build
        - Clean build folder before build
        - Move the Run Script phase to directly after the Target Dependencies step in Build Phases
        - Add the script as a Run Script pre-action in my scheme

    No matter what I do, when I look at the build log, I see that the Info.plist file is processed as one of the very first steps. It always runs prior to my script updating the build number, which is, I assume, why the build number is never current in the .ipa file. Is there a way to force the Run Script phase to run before the Info.plist file is processed?
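
    A quick way to confirm which plist actually shipped, independent of what Xcode displays afterwards (a diagnostic sketch only; the .ipa path and app name are placeholders):

        # Pull the Info.plist out of the packaged .ipa and print its build number;
        # PlistBuddy reads the binary plist directly:
        unzip -p MyApp.ipa 'Payload/MyApp.app/Info.plist' > /tmp/shipped-Info.plist
        /usr/libexec/PlistBuddy -c 'Print CFBundleVersion' /tmp/shipped-Info.plist

    If the shipped number is exactly one behind the source plist, that confirms the plist was processed before the script ran.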

  • delete multi-line block of text with internal flag in povray file

    - by Sibo Lin
    I have a POV-Ray file which defines a lot of cylinders and spheres. Sometimes these shapes are defined to have "color@", which makes the file unrenderable. One solution I've found is to delete the offending cylinders and spheres. So a file that contains this text:

        cylinder {
            < -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
            texture { colorO }
        }
        sphere {
            < -0.00950, 0.00357, 0.00227>, 0.00716
            texture { color@ }
        }
        cylinder {
            < -0.00950, 0.00357, 0.00227>, < 0.00327, 0.00169, 0.00108>, 0.00716
            texture { color@ }
        }
        sphere {
            < 0.15373, 0.00601, 0.18223>, 0.00716
            texture { colorO }
        }

    would turn into this text:

        cylinder {
            < -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
            texture { colorO }
        }
        sphere {
            < 0.15373, 0.00601, 0.18223>, 0.00716
            texture { colorO }
        }

    Is there some way to do this replacement with a shell script? Preferably in tcsh. Thanks!
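
    One possible approach (a sketch; it assumes the shapes are brace-delimited blocks with balanced braces, and it runs fine from a tcsh prompt since the logic lives in awk): buffer each top-level block by counting braces, and print it only if it never mentions color@.

        awk '
        {
            buf = buf $0 "\n"
            n += gsub(/\{/, "{")      # count opening braces (gsub returns the count)
            n -= gsub(/\}/, "}")      # count closing braces
            if (n == 0) {             # a top-level block (or bare line) just ended
                if (buf !~ /color@/) printf "%s", buf
                buf = ""
            }
        }' scene.pov > scene-clean.pov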

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB in total, usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
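
    One sketch in that direction, assuming GNU tar (which provides --to-command and sets TAR_FILENAME for it) and bash for the process substitution: tee the archive stream into a second tar that re-parses it in memory and hands each member's bytes to a checksum helper, so each file is only read from disk once.

        #!/bin/sh
        # md5-member.sh -- invoked by tar once per archive member; tar feeds the
        # member's bytes on stdin and sets TAR_FILENAME in the environment.
        printf '%s  %s\n' "$(md5sum | cut -d' ' -f1)" "$TAR_FILENAME" >> archive.md5

        # Stream once from disk, then fork the stream: one copy goes to tape, the
        # other is re-parsed for per-file checksums (tape device name assumed):
        tar -cf - files \
            | tee >(tar -xf - --to-command=./md5-member.sh > /dev/null) \
            | dd of=/dev/nst0 bs=256k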

  • Can a Generic Method handle both Reference and Nullable Value types?

    - by Adam Lassek
    I have a series of extension methods to help with null-checking on IDataRecord objects, which I'm currently implementing like this:

        public static int? GetNullableInt32(this IDataRecord dr, int ordinal)
        {
            int? nullInt = null;
            return dr.IsDBNull(ordinal) ? nullInt : dr.GetInt32(ordinal);
        }

        public static int? GetNullableInt32(this IDataRecord dr, string fieldname)
        {
            int ordinal = dr.GetOrdinal(fieldname);
            return dr.GetNullableInt32(ordinal);
        }

    and so on, for each type I need to deal with. I'd like to reimplement these as a generic method, partly to reduce redundancy and partly to learn how to write generic methods in general. I've written this:

        public static Nullable<T> GetNullable<T>(this IDataRecord dr, int ordinal)
        {
            Nullable<T> nullValue = null;
            return dr.IsDBNull(ordinal) ? nullValue : (Nullable<T>)dr.GetValue(ordinal);
        }

    which works as long as T is a value type, but if T is a reference type it won't. This method would need to return a Nullable<T> if T is a value type, and default(T) otherwise. How would I implement this behavior?

  • Vim hanging after parsing the .vimrc file (even a blank one) on Solaris 10

    - by Seamus
    Hello all, I am having a problem with vim 7.2 hanging (for about 10 seconds) after it parses the .vimrc file. I had a similar issue in the past with tcsh on Linux, but it was resolved by setting TERM to xterm-color. The same does not resolve the issue here. Any idea what may be causing this?

        $ env
        USER=redacted
        LOGNAME=redacted
        HOME=/home/redacted
        PATH=redacted
        MAIL=/var/spool/mail/redacted
        SHELL=/bin/tcsh
        TZ=redacted
        LC_COLLATE=C
        SSH_CLIENT=redacted
        SSH_CONNECTION=redacted
        SSH_TTY=/dev/pts/11
        TERM=dtterm
        HOSTTYPE=sun4
        VENDOR=sun
        OSTYPE=solaris
        MACHTYPE=sparc
        SHLVL=1
        PWD=/home/redacted
        GROUP=redacted
        HOST=redacted
        REMOTEHOST=redacted
        QUOTA_CHECKED=1
        WHOAMI=redacted
        HOSTNAME=redacted
        EDITOR=vim
        PRINTER=redacted
        INFOPATH=/software/gnu/gcc/2.8.1/sun4os5.10/info:/software/gnu/sun4os5/info:/software/gnu/emacs/20.3.1/sun4os5/info:/software/gnuish/sun4os5/info:/usr/local/gnu/info
        MANPATH=/software/gnu/gcc/2.8.1/sun4os5.10/man:/software/gnu/sun4os5/man:/software/gnu/emacs/20.3.1/sun4os5/man:/opt/rational/clearcase/doc/man:/usr/openwin/man:/usr/share/man:/usr/local/man:/usr/dt/man:/software/gnuish/sun4os5/man
        H_ARCH=sun4
        H_ARCHOS=sun4os5
        H_ARCHOS_SUB=sun4os5.10
        H_OSTYPE=SUNOS
        H_OSREV=51000
        T_ARCH=sun4
        T_ARCHOS=sun4os5
        T_ARCHOS_SUB=sun4os5.10
        T_OSTYPE=SUNOS
        T_OSREV=51000
        X11HOME=/usr/local/x11/sun4os5
        OPENWINHOME=/usr/openwin
        LD_LIBRARY_PATH=/usr/dt/lib:/usr/openwin/lib
        MOTIFHOME=/usr/dt
        XINITRC=/usr/openwin/lib/Xinitrc
        GCC_REV=281
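
    Two guesses worth ruling out, offered as hedged diagnostics rather than a confirmed cause: a vim built with +X11 stalls at startup if it tries and fails to reach an X server (only applies if DISPLAY is set in the session where vim runs), and some terminals never answer vim's terminal-version query, which also costs a startup wait. A quick test, plus a Solaris-native way to see where the time actually goes:

        vim -X                               # start without attempting an X server connection
        truss -o /tmp/vim.truss vim -c q     # trace system calls during startup (Solaris)
        grep -n connect /tmp/vim.truss       # look for slow or failing connection attempts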

  • Using libgrib2c in a C++ application, linker error "undefined reference to..."

    - by Rich
    EDIT: If you're going to be doing things with GRIB files, I would recommend the GDAL library, which is backed by the Open Source Geospatial Foundation. You will save yourself a lot of headache :)

    I'm using Qt Creator in Ubuntu, creating a C++ app. I am attempting to use an external lib, libgrib2c.a, that has a header grib2.h. Everything compiles, but when it tries to link I get the error:

        undefined reference to 'seekgb(_IO_FILE*, long, long, long*, long*)'

    I have tried wrapping the header file with:

        extern "C" {
        #include "grib2.h"
        }

    but it didn't fix anything, so I figured that was not my problem. In the .pro file I have the line:

        include($${ROOT}/Shared/common/commonLibs.pri)

    and in commonLibs.pri I have:

        INCLUDEPATH += $${ROOT}/external_libs/g2clib/include
        LIBS += -L$${ROOT}/external_libs/g2clib/lib
        LIBS += -lgrib2c

    I am not encountering an error finding the library. If I do an nm command on libgrib2c.a I get:

        nm libgrib2c.a | grep seekgb
        seekgb.o:
        00000000 T seekgb

    And when I run qmake with the additional argument of LIBS+=-Wl,--verbose I can find the lib file in the output:

        attempt to open /usr/lib/libgrib2c.so failed
        attempt to open /usr/lib/libgrib2c.a failed
        attempt to open /mnt/sdb1/ESMF/App/ESMF_App/../external_libs/linux/qwt_6.0.2/lib/libgrib2c.so failed
        attempt to open /mnt/sdb1/ESMF/App/ESMF_App/../external_libs/linux/qwt_6.0.2/lib/libgrib2c.a failed
        attempt to open ..//Shared/Config/lib/libgrib2c.so failed
        attempt to open ..//Shared/Config/lib/libgrib2c.a failed
        attempt to open ..//external_libs/libssh2/lib/libgrib2c.so failed
        attempt to open ..//external_libs/libssh2/lib/libgrib2c.a failed
        attempt to open ..//external_libs/openssl/lib/libgrib2c.so failed
        attempt to open ..//external_libs/openssl/lib/libgrib2c.a failed
        attempt to open ..//external_libs/g2clib/lib/libgrib2c.so failed
        attempt to open ..//external_libs/g2clib/lib/libgrib2c.a succeeded

    It doesn't show any of the .o files in the library, though - is this because it is a C library in my C++ app? In the .cpp file where I am trying to use the library I have:

        #include "gribreader.h"
        #include <stdio.h>
        #include <stdlib.h>
        #include <external_libs/g2clib/include/grib2.h>
        #include <Shared/logging/Logger.hpp>

        //------------------------------------------------------------------------------
        /// Opens a GRIB file from disk.
        ///
        /// This function opens the grib file and searches through it for how many GRIB
        /// messages are contained as well as their starting locations.
        ///
        /// \param a_filePath The path to the file to be opened.
        /// \return True if successful, false if not.
        //------------------------------------------------------------------------------
        bool GRIBReader::OpenGRIB(std::string a_filePath)
        {
            LOG(notification) << "Attempting to open grib file: " << a_filePath;
            if (isOpen())
            {
                CloseGRIB();
            }
            m_filePath = a_filePath;
            m_filePtr = fopen(a_filePath.c_str(), "r");
            if (m_filePtr == NULL)
            {
                LOG(error) << "Unable to open file: " << a_filePath;
                return false;
            }
            LOG(notification) << "Successfully opened GRIB file";
            g2int currentMessageSize(1);
            g2int seekPosition(0);
            g2int lengthToBeginningOfGrib(0);
            g2int seekLength(32000);
            int i(0);
            int iterationLimit(300);
            m_GRIBMessageLocations.clear();
            m_GRIBMessageSizes.clear();
            while (i < iterationLimit)
            {
                seekgb(m_filePtr, seekPosition, seekLength, &lengthToBeginningOfGrib, &currentMessageSize);
                if (currentMessageSize != 0)
                {
                    LOG(verbose) << "Adding GRIB message location " << lengthToBeginningOfGrib
                                 << " with length " << currentMessageSize;
                    m_GRIBMessageLocations.push_back(lengthToBeginningOfGrib);
                    m_GRIBMessageSizes.push_back(currentMessageSize);
                    seekPosition = lengthToBeginningOfGrib + currentMessageSize;
                    LOG(verbose) << "GRIB seek position moved to " << seekPosition;
                }
                else
                {
                    LOG(notification) << "End of GRIB file found, after " << i << " GRIB messages.";
                    break;
                }
            }
            if (i >= iterationLimit)
            {
                LOG(warning) << "The iteration limit of " << iterationLimit
                             << " was reached while searching for GRIB messages";
            }
            return true;
        }

    And the header grib2.h is as follows:

        #ifndef _grib2_H
        #define _grib2_H
        #include <stdio.h>

        #define G2_VERSION "g2clib-1.4.0"

        #ifdef __64BIT__
        typedef int g2int;
        typedef unsigned int g2intu;
        #else
        typedef long g2int;
        typedef unsigned long g2intu;
        #endif
        typedef float g2float;

        struct gtemplate {
            g2int type;      /* 3=Grid Definition Template.                      */
                             /* 4=Product Definition Template.                   */
                             /* 5=Data Representation Template.                  */
            g2int num;       /* template number.                                 */
            g2int maplen;    /* number of entries in the static part             */
                             /* of the template.                                 */
            g2int *map;      /* num of octets of each entry in the               */
                             /* static part of the template.                     */
            g2int needext;   /* indicates whether or not the template            */
                             /* needs to be extended.                            */
            g2int extlen;    /* number of entries in the template extension.     */
            g2int *ext;      /* num of octets of each entry in the extension     */
                             /* part of the template.                            */
        };
        typedef struct gtemplate gtemplate;

        struct gribfield {
            g2int version, discipline;
            g2int *idsect;
            g2int idsectlen;
            unsigned char *local;
            g2int locallen;
            g2int ifldnum;
            g2int griddef, ngrdpts;
            g2int numoct_opt, interp_opt, num_opt;
            g2int *list_opt;
            g2int igdtnum, igdtlen;
            g2int *igdtmpl;
            g2int ipdtnum, ipdtlen;
            g2int *ipdtmpl;
            g2int num_coord;
            g2float *coord_list;
            g2int ndpts, idrtnum, idrtlen;
            g2int *idrtmpl;
            g2int unpacked;
            g2int expanded;
            g2int ibmap;
            g2int *bmap;
            g2float *fld;
        };
        typedef struct gribfield gribfield;

        /* Prototypes for unpacking API */
        void seekgb(FILE *, g2int, g2int, g2int *, g2int *);
        g2int g2_info(unsigned char *, g2int *, g2int *, g2int *, g2int *);
        g2int g2_getfld(unsigned char *, g2int, g2int, g2int, gribfield **);
        void g2_free(gribfield *);

        /* Prototypes for packing API */
        g2int g2_create(unsigned char *, g2int *, g2int *);
        g2int g2_addlocal(unsigned char *, unsigned char *, g2int);
        g2int g2_addgrid(unsigned char *, g2int *, g2int *, g2int *, g2int);
        g2int g2_addfield(unsigned char *, g2int, g2int *, g2float *, g2int, g2int,
                          g2int *, g2float *, g2int, g2int, g2int *);
        g2int g2_gribend(unsigned char *);

        /* Prototypes for supporting routines */
        extern double int_power(double, g2int);
        extern void mkieee(g2float *, g2int *, g2int);
        void rdieee(g2int *, g2float *, g2int);
        extern gtemplate *getpdstemplate(g2int);
        extern gtemplate *extpdstemplate(g2int, g2int *);
        extern gtemplate *getdrstemplate(g2int);
        extern gtemplate *extdrstemplate(g2int, g2int *);
        extern gtemplate *getgridtemplate(g2int);
        extern gtemplate *extgridtemplate(g2int, g2int *);
        extern void simpack(g2float *, g2int, g2int *, unsigned char *, g2int *);
        extern void compack(g2float *, g2int, g2int, g2int *, unsigned char *, g2int *);
        void misspack(g2float *, g2int, g2int, g2int *, unsigned char *, g2int *);
        void gbit(unsigned char *, g2int *, g2int, g2int);
        void sbit(unsigned char *, g2int *, g2int, g2int);
        void gbits(unsigned char *, g2int *, g2int, g2int, g2int, g2int);
        void sbits(unsigned char *, g2int *, g2int, g2int, g2int, g2int);
        int pack_gp(g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *,
                    g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *,
                    g2int *, g2int *, g2int *, g2int *, g2int *, g2int *);
        #endif  /* _grib2_H */

    I have been scratching my head for two days on this. If anyone has an idea on what to do or can point me in some sort of direction, I'm stumped. Also, if you have any comments on how I can improve this post I'd love to hear them - I'm kinda new at this posting thing. Usually I'm able to find an answer in the vast stores of knowledge already contained on the web.

  • Problems using (building?) native gem extensions on OS X

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1, and Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode; I built this ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's Nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of 'foo' with different width due to prototype."

    I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael
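
    Given a hand-built 32-bit ruby on Snow Leopard, one thing worth checking is an architecture mismatch: Snow Leopard's gcc compiles x86_64 by default, and a 64-bit bundle fails to load into a 32-bit ruby with exactly this kind of LoadError. A hedged sketch (gem and path as in the example above):

        # Compare the architectures of the interpreter and the extension bundle:
        file $(which ruby)
        file /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

        # If the bundle is x86_64-only, forcing a 32-bit build when the gem
        # compiles is a commonly suggested workaround:
        sudo env ARCHFLAGS="-arch i386" gem install nokogiri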

  • Liskov Substitution and Composition

    - by FlySwat
    Let's say I have a class like this:

        public sealed class Foo
        {
            public void Bar()
            {
                // Do Bar Stuff
            }
        }

    And I want to extend it to add something beyond what an extension method could do... My only option is composition:

        public class SuperFoo
        {
            private Foo _internalFoo;

            public SuperFoo()
            {
                _internalFoo = new Foo();
            }

            public void Bar()
            {
                _internalFoo.Bar();
            }

            public void Baz()
            {
                // Do Baz Stuff
            }
        }

    While this works, it is a lot of work... however I still run into a problem:

        public void AcceptsAFoo(Foo a)

    I can pass in a Foo here, but not a SuperFoo, because C# has no idea that SuperFoo truly does qualify in the Liskov Substitution sense... This means that my extended class via composition is of very limited use. So, the only way to fix it is to hope that the original API designers left an interface lying around:

        public interface IFoo
        {
            void Bar();
        }

        public sealed class Foo : IFoo
        {
            // etc
        }

    Now I can implement IFoo on SuperFoo (which, since SuperFoo already exposes all of Foo's methods, is just a matter of changing the declaration):

        public class SuperFoo : IFoo

    And in the perfect world, the methods that consume Foo would consume IFoos:

        public void AcceptsAFoo(IFoo a)

    Now C# understands the relationship between SuperFoo and Foo due to the common interface, and all is well. The big problem is that .NET seals lots of classes that would occasionally be nice to extend, and they don't usually implement a common interface, so API methods that take a Foo would not accept a SuperFoo and you can't add an overload.

    So, for all the composition fans out there... how do you get around this limitation? The only thing I can think of is to expose the internal Foo publicly, so that you can pass it on occasion, but that seems messy.

  • How to do buffered intersection checks on an IPoint?

    - by Quigrim
    How would I buffer an IPoint to do an intersection check using IRelationalOperator? I have, for argument's sake:

        IPoint p1 = xxx;
        IPoint p2 = yyy;
        IRelationalOperator rel = (IRelationalOperator)p1;
        if (rel.Intersects(p2))
            // Do something

    But now I want to add a tolerance to my check, so I assume the right way to do that is by buffering either p1 or p2. Right? How do I add such a buffer?

    Note: the Intersects method I am using is an extension method I wrote to simplify my code. Here it is:

        /// <summary>
        /// Returns true if the IGeometry is intersected.
        /// This method negates the Disjoint method.
        /// </summary>
        /// <param name="relOp">The rel op.</param>
        /// <param name="other">The other.</param>
        /// <returns></returns>
        public static bool Intersects(this IRelationalOperator relOp, IGeometry other)
        {
            return (!relOp.Disjoint(other));
        }

  • Saving animated GIFs using urllib.urlopen (image saved does not animate)

    - by wenbert
    I have Apache2 + Django + X-Sendfile. My problem is that when I upload an animated GIF, it won't "animate" when I output it through the browser. Here is my code to display the image, which is located outside the publicly accessible directory:

        def raw(request, uuid):
            target = str(uuid).split('.')[:-1][0]
            image = Uploads.objects.get(uuid=target)
            path = image.path
            filepath = os.path.join(path, "%s.%s" % (image.uuid, image.ext))
            response = HttpResponse(mimetype=mimetypes.guess_type(filepath))
            response['Content-Disposition'] = 'filename="%s"' % smart_str(image.filename)
            response["X-Sendfile"] = filepath
            response['Content-length'] = os.stat(filepath).st_size
            return response

    UPDATE: It turns out that this works. My problem is when I try to upload an image via URL. It probably doesn't save the entire GIF?

        def handle_url_file(request):
            """
            Open a file from a URL.
            Split the file to get the filename and extension.
            Generate a random uuid using rand1().
            Then save the file. Return the UUID when successful.
            """
            try:
                file = urllib.urlopen(request.POST['url'])
                randname = rand1(settings.RANDOM_ID_LENGTH)
                newfilename = request.POST['url'].split('/')[-1]
                ext = str(newfilename.split('.')[-1]).lower()
                im = cStringIO.StringIO(file.read())  # constructs a StringIO holding the image
                img = Image.open(im)
                filehash = checkhash(im)
                image = Uploads.objects.get(filehash=filehash)
                uuid = image.uuid
                return "%s" % (uuid)
            except Uploads.DoesNotExist:
                img.save(os.path.join(settings.UPLOAD_DIRECTORY, "%s.%s" % (randname, ext)))
                del img
                filesize = os.stat(os.path.join(settings.UPLOAD_DIRECTORY, "%s.%s" % (randname, ext))).st_size
                upload = Uploads(
                    ip=request.META['REMOTE_ADDR'],
                    filename=newfilename,
                    uuid=randname,
                    ext=ext,
                    path=settings.UPLOAD_DIRECTORY,
                    views=1,
                    bandwidth=filesize,
                    source=request.POST['url'],
                    size=filesize,
                    filehash=filehash,
                )
                upload.save()
                # return uuid
                return "%s" % (upload.uuid)
            except IOError, e:
                raise e

    Any ideas? Thanks! Wenbert

  • Polymorphic urls with singular resources

    - by Brendon Muir
    I'm getting strange output when using the following routing setup:

        resources :warranty_types do
          resources :decisions
        end

        resource :warranty_review, :only => [] do
          resources :decisions
        end

    I have many warranty_types but only one warranty_review (thus the singular route declaration). The decisions are polymorphically associated with both. I have just a single decisions controller and a single _form.html.haml partial to render the form for a decision. This is the view code:

        = simple_form_for @decision, :url => [@decision_tree_owner, @decision.becomes(Decision)] do |form|

    The warranty_type url looks like this (for a new decision):

        /warranty_types/2/decisions

    whereas the warranty_review url looks like this:

        /admin/warranty_review/decisions.1

    I think that because the warranty_review id has nowhere to go, it's just getting appended to the end as an extension. Can someone explain what's going on here and how I might be able to fix it? I can work around it by detecting a warranty_review class and substituting @decision_tree_owner with :warranty_review, and this generates the correct url, but it's messy. I would have thought that the routing would be smart enough to realise that warranty_review is a singular resource and thus discard the id from the URL. This is Rails 3 by the way :)

  • Ant: use include and exclude together

    - by Ken
    OK, this seems like it should be really simple. I'm using Apache Ant 1.8, and I have a target which does:

        <delete file="output/program.tar.bz2"/>
        <tar basedir="input" destfile="output/program.tar.bz2" compression="bzip2">
            <tarfileset dir="input">
                <include name="goodfolder1/**"/>
                <include name="goodfolder2/**"/>
                <exclude name="**/badfile"/>
                <exclude name="**/*.badext"/>
            </tarfileset>
        </tar>

    I want it to make a .tar.bz2 of input/goodfolder1 and input/goodfolder2, excluding files named "badfile" and files with extension ".badext". It's giving me a .tar.bz2, but it's including badfile and *.badext - the excludes seem to be ignored. The order of include/exclude doesn't seem to make a difference. I tried wrapping the includes/excludes in a <patternset> (the docs say it's implicit?), but it made no difference. I'm sure there's something simple I'm missing, since the manual has a very similar example, though in a somewhat different context.

    EDIT: It looks like it could be related to the basedir="input" attribute: it's adding everything in "input", and then adding everything in the tarfileset on top of that. Files I want appear twice in program.tar.bz2, but files that are excluded only appear once. But I don't see how this is different from the examples in the manual.

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop, which means they take forever to boot. Can anyone fix me up with a current How-To?

    Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora and RHEL, which are all distributions that my company supports already. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
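
    For anyone following the same path, a minimal invocation looks roughly like this (Fedora/CentOS livecd-tools; the kickstart file is your own and the label is a placeholder):

        # The kickstart file controls the package set, boot arguments, and the
        # %post section that can launch the updater instead of a login shell:
        yum install livecd-tools
        livecd-creator \
            --config=firmware-updater.ks \
            --fslabel=FirmwareCD \
            --cache=/var/cache/live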

  • How to inherit the current path when invoking Maven's exec-maven-plugin?

    - by wishihadabettername
    I have an <exec-maven-plugin> execution which calls an external command (in this case, svnversion). The command is in the path for the current user. However, when a separate shell is spawned by the plugin, the path is not initialized. I don't want to hardcode or define a variable for each external command (there would be too much to maintain, especially as there are both Windows and *nix users). My pom.xml contains the following:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.1</version>
          <executions>
            <execution>
              <id>svnversion-exec</id>
              <phase>process-resources</phase>
              <goals>
                <goal>exec</goal>
              </goals>
              <configuration>
                <executable>svnversion</executable>
                <arguments>
                  <argument><![CDATA[ >version.txt ]]></argument>
                </arguments>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Currently I get the following output:

        [INFO] [exec:exec {execution: svnversion-exec}]
        'svnversion' is not recognized as an internal or external command,
        operable program or batch file.
        [ERROR] BUILD ERROR: Result of cmd.exe /X /C "svnversion >version.txt" execution is: '1'.

    Thank you!

  • Python modules not updating after restarting the main module.

    - by Ian
    I've recently come back to a project, having had to stop for about 6 months, and after reinstalling my operating system I'm having all kinds of crazy things happen. I made sure to install the same version (2.6) of Python that I was using before.

    It started with a strange Tkinter error that I hadn't had trouble with before; the program is relatively simple, and the 2 or 3 bugs that were left when I quit I had documented, and they weren't related to the interface. Things got even weirder when the same error would pop up even after I had removed the offending section of code. In fact, the traceback pointed to a line that didn't even exist in the module it was referencing, e.g. line 262 when the module was only 200 lines long.

    After just starting a completely new file for the main module and copy/pasting, it finally recognized that the offending code was gone, and I stopped getting the error - only to find that any updates to the code I made in another module didn't show up when I restarted the program through the shell. (I didn't forget to save.) After fiddling with this, of course, the old interface error came back, only in a different section of code that had been working previously. In fact, if I revert back to the files I had six months ago, the program works fine. As soon as I change anything in the main module, however, the interface bug comes back.

    Here's the original error:

        Exception in Tkinter callback
        Traceback (most recent call last):
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
            return self.func(*args)
          File "C:\PyStuff\interface.py", line 202, in dispOne
            __main__.top.destroy()
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1938, in destroy
            self.tk.call('destroy', self._w)
        TclError: can't invoke "destroy" command: application has been destroyed

    I'm guessing something else is going on here other than my own poor programming. Anyone have any ideas?
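
    The symptoms here (tracebacks pointing at lines that don't exist, edits not taking effect until files are recreated) are consistent with Python importing stale artifacts: leftover .pyc bytecode, or a second copy of the module shadowing the one being edited. A hedged first step, run from the project root:

        del /s *.pyc                    :: Windows: remove all compiled bytecode
        find . -name '*.pyc' -delete    #  *nix equivalent

        :: Then check which file actually gets imported:
        python -c "import interface; print interface.__file__"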

  • Error installing FeedZirra

    - by Gautam
    Hi, I am new to Ruby on Rails. I am excited about feed parsing, but when I install FeedZirra I get this error. I use Windows 7 and Ruby 1.8.7. Please help. Thanks in advance.

        C:\Ruby187>gem sources -a http://gems.github.com
        http://gems.github.com added to sources

        C:\Ruby187>gem install pauldix-feedzirra
        Building native extensions.  This could take a while...
        ERROR:  Error installing pauldix-feedzirra:
                ERROR: Failed to build gem native extension.

        C:/Ruby187/bin/ruby.exe extconf.rb
        checking for curl-config... no
        checking for main() in -lcurl... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=C:/Ruby187/bin/ruby
            --with-curl-dir
            --without-curl-dir
            --with-curl-include
            --without-curl-include=${curl-dir}/include
            --with-curl-lib
            --without-curl-lib=${curl-dir}/lib
            --with-curllib
            --without-curllib

        extconf.rb:12: Can't find libcurl or curl/curl.h (RuntimeError)
        Try passing --with-curl-dir or --with-curl-lib and --with-curl-include
        options to extconf.

        Gem files will remain installed in C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0 for inspection.
        Results logged to C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0/ext/gem_make.out
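
    The failing piece is the taf2-curb dependency, whose extconf.rb cannot find libcurl. A sketch of the usual remedy, assuming a curl development package unpacked at C:/curl (any path containing the headers and the import library works; the location is a placeholder):

        :: Point the native build at the curl headers and library explicitly:
        gem install taf2-curb -- --with-curl-lib=C:/curl/lib --with-curl-include=C:/curl/include

        :: Then retry the top-level gem:
        gem install pauldix-feedzirra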

  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use GitHub in a basic way (e.g. push my local file changes to GitHub - so I've not done pulling down of content from GitHub and merging it into my localhost version or anything like that).

    I had a look at this existing question - Git: localhost remote development remote production - but I think it may have been a bit advanced for me at this stage, as I didn't quite understand the terminology that most of the people were using.

    What I would like to achieve is a local server that my team of developers can all push to and pull from, and then have that local server upload any updated files automatically to our web server so we could see the updates live in the browser. I'm happy to set up a server in the office running Mac OS X Server, install Git on it, and get the devs to write a shell script to push to the remote server - but only if it's fairly easy for the devs' local git to push to this new local server. I'm not a network engineer, so I don't know what would need to be set up for that to work. I know we could obviously make the server accessible via a local IP address like 192.168.0.xxx, but I'm not sure how that works with pushing to a git repository on that server. Would that literally be something like doing this on my local machine?

        git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git

    Any ideas or advice you can give to a total Git newbie trying to help his team get a better workflow. Kind regards, Mark
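
    A minimal sketch of the usual setup, assuming the office box is reachable over ssh at 192.168.0.xxx and has git installed (repository names and paths are placeholders):

        # On the office server: create a bare repository everyone pushes to.
        mkdir -p /srv/git/project.git && cd /srv/git/project.git && git init --bare

        # On each developer machine:
        git remote add office ssh://user@192.168.0.xxx/srv/git/project.git
        git push office master

        # Back on the server: a post-receive hook can check each push out into a
        # working directory, which a script can then sync to the web server.
        printf '%s\n' '#!/bin/sh' \
            'GIT_WORK_TREE=/var/www/staging git checkout -f master' > hooks/post-receive
        chmod +x hooks/post-receive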

  • Render multiple Form instances

    - by vorpyg
    I have a simple application where users are supposed to bet on the outcome of a match. A match consists of two teams, a result and a stake. Matches with teams are created in the Django admin, and participants are to fill in result and stake. The form must be generated dynamically, based on the matches in the database. My idea is to have one (Django) Form instance for each match and pass these instances to the template. It works fine when I do it from the Django shell, but the instances aren't rendered when I load my view.

    The form looks like this:

        class SuggestionForm(forms.Form):
            def __init__(self, *args, **kwargs):
                try:
                    match = kwargs.pop('match')
                except KeyError:
                    pass
                super(SuggestionForm, self).__init__(*args, **kwargs)
                label = match
                self.fields['result'] = forms.ChoiceField(label=label, required=True, choices=CHOICES, widget=forms.RadioSelect())
                self.fields['stake'] = forms.IntegerField(label='', required=True, max_value=50, min_value=10, initial=10)

    My (preliminary) view looks like this:

        def suggestion_form(request):
            matches = Match.objects.all()
            form_collection = {}
            for match in matches:
                f = SuggestionForm(request.POST or None, match=match)
                form_collection['match_%s' % match.id] = f
            return render_to_response('app/suggestion_form.html',
                {'forms': form_collection},
                context_instance=RequestContext(request)
            )

    My initial thought was that I could pass the form_collection to the template and then loop through the collection like this, but it does not work:

        {% for form in forms %}
            {% for field in form %}
                {{ field }}
            {% endfor %}
        {% endfor %}

    (The output is actually the dict keys with added spaces in between each letter - I've no idea why...)

    It works if I only pass one Form instance to the template and only run the inner loop. Suggestions are greatly appreciated.

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ-to-SQL code, which is generating around 100 update statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code:

        List<FooBar> foos;
        int userId = 123;

        using (DataClassesDataContext db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that the .ToList() fires off a query to get all the FooBars for a particular user, and then for each FooBar object an UPDATE statement is executed to update the IsFoo property. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is the below:

        UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123

    EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr...! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It will still require some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.

  • Is there a FAST way to export and install an app on my phone, while signing it with my own keystore?

    - by Alexei Andreev
    So, I've downloaded my own application from the Market and installed it on my phone. Now I am trying to install a temporary new version from Eclipse, but here is the message I get:

        Re-installation failed due to different application signatures.
        You must perform a full uninstall of the application. WARNING: This will remove the application data!
        Please execute 'adb uninstall com.applicationName' in a shell.
        Launch canceled!

    Now, I really really don't want to uninstall the application, because I will lose all my data. One solution I found is to export my application, creating a new .apk, and then install it via HTC Sync (probably a different program depending on what phone you have). The problem is this takes a long time to do, since I need to enter the password for the keystore each time and then wait for HTC Sync. It's a pain in the ass! So the question is: Is there a way to make Eclipse automatically use my keystore to sign the application (quickly and automatically)? Or perhaps to replace the debug keystore with my own? Or perhaps just tell it to remember the password, so I don't have to enter it every time...? Or some other way to solve this problem?
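
    One workaround that keeps the data: the installed copy is release-signed, and adb can reinstall in place as long as the new .apk carries the same signature. A sketch signing the unsigned export by hand (keystore file, alias and apk names are placeholders):

        # Sign the exported (unsigned) apk with the release key, align it, and
        # reinstall over the existing app without wiping its data:
        jarsigner -keystore my-release.keystore MyApp-unsigned.apk myalias
        zipalign -f 4 MyApp-unsigned.apk MyApp-release.apk
        adb install -r MyApp-release.apk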
