Search Results

Search found 64 results on 3 pages for 'typecast'.

Page 3 of 3

  • Can I avoid explicitly casting objects with a common subclass?

    - by prendio2
    I have an iPodLibraryGroup class and Artist and Album both inherit from it. When it comes to my view controllers though, I find that I'm duplicating lots of code; for example I have an ArtistListViewController and an AlbumListViewController even though they're both doing basically the same thing. The reason I've ended up duplicating the code is that these view controllers each refer to either an Artist object or an Album object and I'm not sure how to set it up so that one view controller could handle both — these view controllers are mainly accessing methods that the objects have in common from iPodLibraryGroup. As an example, to hopefully make this clearer, consider this code in AlbumListViewController: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { Album *album = nil; album = [self albumForRowAtIndexPath:indexPath inTableView:tableView]; … if (!album.thumbnail) { [self startThumbnailDownload:album forIndexPath:indexPath inTableView:tableView]; cell.imageView.image = [UIImage imageNamed:@"Placeholder.png"]; } else { cell.imageView.image = album.thumbnail; } return cell; } This is essentially completely repeated (along with a hell of a lot more repeated code) in ArtistListViewController just so that I can typecast the local variable as an Artist instead of an Album. Is there a way to avoid explicitly naming Artist or Album here so that the same code could work for any object that is a child of iPodLibraryGroup?
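
    One possible approach, shown as a minimal sketch rather than the original project's code: type the local variable as the shared superclass iPodLibraryGroup and move the table-view code into a common base controller. The itemForRowAtIndexPath:inTableView: accessor below is hypothetical (each subclass, or a shared superclass method, would supply it), and startThumbnailDownload: would likewise need to accept the base type.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            // Declared as the superclass, so the same code serves Artist and Album rows.
            iPodLibraryGroup *item = [self itemForRowAtIndexPath:indexPath inTableView:tableView];
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:@"Cell"] autorelease];
            }
            if (!item.thumbnail) {
                [self startThumbnailDownload:item forIndexPath:indexPath inTableView:tableView];
                cell.imageView.image = [UIImage imageNamed:@"Placeholder.png"];
            } else {
                cell.imageView.image = item.thumbnail;
            }
            return cell;
        }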

    Read the article

  • using STI and ActiveRecordBase<> with full FindAll

    - by oillio
    Is it possible to use generic support with single table inheritance, and still be able to FindAll on the base class? As a bonus question, will I be able to use ActiveRecordLinqBase<T> as well? I do love those queries. More detail: Say I have the following classes defined: public interface ICompany { int ID { get; set; } string Name { get; set; } } [ActiveRecord("companies", DiscriminatorColumn="type", DiscriminatorType="String", DiscriminatorValue="NA")] public abstract class Company<T> : ActiveRecordBase<T>, ICompany { [PrimaryKey] private int Id { get; set; } [Property] public String Name { get; set; } } [ActiveRecord(DiscriminatorValue="firm")] public class Firm : Company<Firm> { [Property] public string Description { get; set; } } [ActiveRecord(DiscriminatorValue="client")] public class Client : Company<Client> { [Property] public int ChargeRate { get; set; } } This works fine for most cases. I can do things like: var x = Client.FindAll(); But sometimes I want all of the companies. If I was not using generics I could do: var x = (Company[]) FindAll(Company); Client a = (Client)x[0]; Firm b = (Firm)x[1]; Is there a way to write a FindAll that returns an array of ICompany instances that can then be typecast into their respective types? Something like: var x = (ICompany[]) FindAll(Company<ICompany>); Client a = (Client)x[0]; Or maybe I am going about implementing the generic support all wrong?
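
    A sketch of one workaround (not necessarily the idiomatic ActiveRecord answer, and the helper class below is hypothetical): since each concrete subtype keeps its own static FindAll(), the base-class query can be assembled by hand and exposed through the interface. Assumes LINQ is available and that Firm and Client are the only subtypes.

        using System.Collections.Generic;
        using System.Linq;

        public static class CompanyQueries
        {
            // Hypothetical helper: gathers every mapped subtype into one ICompany[].
            public static ICompany[] FindAllCompanies()
            {
                var all = new List<ICompany>();
                all.AddRange(Client.FindAll().Cast<ICompany>());
                all.AddRange(Firm.FindAll().Cast<ICompany>());
                return all.ToArray();
            }
        }

    Callers can then test and cast each element back to Client or Firm exactly as in the non-generic example.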

    Read the article

  • Learning Java, how to type text on canvas?

    - by Voley
    I'm reading a book by Eric Roberts, The Art and Science of Java, and it has an exercise that I can't figure out. You have to make a calendar with GRects, 7 by 6; that goes OK, the code part is easy, but you also have to put the numbers of the dates on those rectangles, and that's kinda hard for me; there is nothing about it in the book. I tried using the GLabel class, but the problem is that I need to work with those numbers, and it says "can't convert from int to string and vice versa". GLabel(string, posX, posY) does not accept an int as a parameter, only a String; I even tried typecasting, still not working. For example, I want to make a loop: int currentDate = 1; while (currentDate < 31) { add(new GLabel(currentDate, 100, 100)); currentDate++; } This code says, no man, can't convert int to String. If I try changing currentDate to a String, it works, but then I have a problem with the calculation, as I can't do arithmetic on a String; it doesn't even allow me to typecast it into an int. How can I fix this? Maybe there is another class or method to put text over those rectangles? I know about println, but it doesn't have any x or y coordinates, so I can't work with it. And I think it's only for console programs.
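
    The missing piece is just an explicit int-to-String conversion: GLabel only accepts a String, while the arithmetic stays on the int. A minimal sketch, assuming the acm.graphics library used in the book:

        import acm.graphics.GLabel;
        import acm.program.GraphicsProgram;

        public class CalendarLabels extends GraphicsProgram {
            public void run() {
                int currentDate = 1;
                while (currentDate < 31) {
                    // Convert only for display; keep currentDate as an int for calculations.
                    add(new GLabel(String.valueOf(currentDate), 100, 100));
                    currentDate++;
                }
            }
        }

    Going the other way, Integer.parseInt(someString) turns label text back into an int if that is ever needed.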

    Read the article

  • How do I access an EJB implementing a remote interface in a separate web application?

    - by Nitesh Panchal
    Hello, I am using NetBeans 6.8 and GlassFish v3.0. I created an EJB module, created entity classes from the database, and then created a stateless session bean with a remote interface. Say, e.g.: @Remote public interface customerRemote{ public void add(String name, String address); public Customer find(Integer id); } @Stateless public class customerBean implements customerRemote{ //implementations of methods } Then I created a new web application. But now how do I access the remote EJBs in my web application? I could look up a bean with its JNDI name, but what I want to know is: what type of object will it return? How do I typecast it to customerRemote? I don't have any class named customerRemote in my web application. So, how do I do it? Also, what about the entity class Customer? There is no class named Customer in my web application either. All EJBs and entity classes are in the separate EJB module. Please help me :(
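
    A minimal sketch of the usual arrangement (the JNDI name below is the Java EE 6 portable form and is an assumption about how the module was deployed): package customerRemote and Customer into a small API jar that both the EJB module and the web application depend on, then look the bean up and cast it to the remote interface.

        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class CustomerClient {
            public Customer findCustomer(Integer id) throws NamingException {
                InitialContext ctx = new InitialContext();
                // Assumed module/bean names; GlassFish prints the portable JNDI names it
                // registers for each session bean in the server log on deployment.
                customerRemote remote = (customerRemote) ctx.lookup(
                        "java:global/MyEjbModule/customerBean!customerRemote");
                return remote.find(id);
            }
        }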

    Read the article

  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a linux box I have running Ubuntu 10.04 LTS. I can access mysql via SSH mysql -p and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me... SELECT host, user FROM mysql.user Yields: +-----------+------------------+ | host | user | +-----------+------------------+ | % | AddedUser | | 127.0.0.1 | root | | li241-255 | root | | localhost | debian-sys-maint | | localhost | root | +-----------+------------------+ Problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES but that seems to have no effect. I know it's not Node.js because I'm using the same code on another database and it's working in that environment. Edit This is the error node is giving me. node.js:50 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: ECONNREFUSED, Connection refused at Stream._onConnect (net.js:687:18) at IOWatcher.onWritable [as callback] (net.js:284:12) Edit 2 I have the right port & server as best I can tell. My /etc/mysql/my.cnf contains this: port = 3306 socket = /var/run/mysqld/mysqld.sock My MySQL object contains: { host: 'localhost', port: 3306, user: 'removed', password: 'removed', database: '', typeCast: true, flags: 260047, maxPacketSize: 16777216, charsetNumber: 192, debug: false, ending: false, connected: false, _greeting: null, _queue: [], _connection: null, _parser: null, server: 'ExternalIpAddress' } Possibly useful? netstat -ln | grep mysql unix 2 [ ACC ] STREAM LISTENING 1016418 /var/run/mysqld/mysqld.sock
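
    One thing worth checking, offered as an assumption rather than a confirmed diagnosis: the mysql command-line client reaches a local server through the Unix socket, while Node.js connects over TCP, so a server that is not listening for TCP on the loopback interface behaves exactly like this (the CLI works, local TCP gets ECONNREFUSED, remote TCP may still work). A sketch of the relevant settings; 'yourdb' and 'yourpassword' are placeholders:

        # /etc/mysql/my.cnf -- make sure mysqld accepts TCP from local clients
        # (and that skip-networking is not set)
        bind-address = 0.0.0.0    # listen on all interfaces so both local and remote TCP work

        -- and in MySQL, allow the account in from localhost as well as '%':
        GRANT ALL PRIVILEGES ON yourdb.* TO 'AddedUser'@'localhost' IDENTIFIED BY 'yourpassword';
        FLUSH PRIVILEGES;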

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at first glance. I know about the differences in L-values and R-values. Still, I recently tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e. int foo[2][3]; int (*a)[3] = foo; However, I just can't figure out how the compiler "understands" the declaration of a, given the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler: int foo[2][3]; typedef int my_t[3]; my_t *a = foo; Bottom line: can someone explain how the declaration int (*a)[3] is read by the compiler? int a[3] is simple, and int *a[3] is simple as well. But then, why is it not int *(a[3])? EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
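
    For what it's worth, a small sketch contrasting the two forms: the parentheses bind a to * before [], so int (*a)[3] is "pointer to array of 3 int", whereas int *a[3] (which the compiler does read as int *(a[3])) is "array of 3 pointer to int".

        #include <stdio.h>

        int main(void) {
            int foo[2][3] = { {1, 2, 3}, {4, 5, 6} };

            int (*a)[3] = foo;      /* pointer to an array of 3 ints: points at foo[0] */
            int *b[3];              /* array of 3 pointers to int: a different animal  */
            b[0] = &foo[0][0];

            printf("%d %d\n", a[1][2], *b[0]);  /* prints "6 1" */
            return 0;
        }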

    Read the article

  • PHP variable equals true no matter what the value, even 0

    - by kaigoh
    This is the var_dump: object(stdClass)#27 (8) { ["SETTING_ID"]=> string(2) "25" ["SETTING_SOURCE"]=> string(2) "XV" ["SETTING_FLEET"]=> string(3) "313" ["SETTING_EXAM"]=> string(1) "A" ["SETTING_HIDE"]=> string(1) "0" ["SETTING_THRESHOLD"]=> string(1) "0" ["SETTING_COUNT"]=> string(8) "POSITIVE" ["SETTING_USAGE"]=> string(7) "MILEAGE" } The variable I am testing is SETTING_HIDE. This is being pulled from MySQL using the CodeIgniter framework. I don't know if it is just me being thick after a rather long day at work or what, but no matter what value the variable holds, any if statement made against it returns true, even when typecast as a boolean or with a direct string comparison, i.e. == "0" or == "1". Anyone with a fresh pair of eyes care to make me feel silly!?! :) Just to clarify, I have tried the following: if($examSetting->SETTING_HIDE == "1") { $showOnABC = "checked=\"checked\""; } if((bool)$examSetting->SETTING_HIDE) { $showOnABC = "checked=\"checked\""; } if($examSetting->SETTING_COUNT == "POSITIVE") further on in my code works perfectly.
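
    A small diagnostic sketch, not a confirmed fix: dump the value at the exact point of the test and compare strictly, since PHP's loose comparison has surprises ("0" == false is true, for example) and the dump above may not come from the same object the if statement sees.

        // Diagnostic sketch: inspect the value right where it is tested and use
        // strict comparison so type juggling cannot get in the way.
        var_dump($examSetting->SETTING_HIDE);            // expect string(1) "0"
        if ($examSetting->SETTING_HIDE === "1") {        // strict: type and value must match
            $showOnABC = 'checked="checked"';
        }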

    Read the article

  • C++ Problem: Class Promotion using derived class

    - by Michael Fitzpatrick
    I have a class for Float32 that is derived from Float32_base class Float32_base { public: // Constructors Float32_base(float x) : value(x) {}; Float32_base(void) : value(0) {}; operator float32(void) {return value;}; Float32_base operator =(float x) {value = x; return *this;}; Float32_base operator +(float x) const { return value + x;}; protected: float value; } class Float32 : public Float32_base { public: float Tad() { return value + .01; } } int main() { Float32 x, y, z; x = 1; y = 2; // WILL NOT COMPILE! z = (x + y).Tad(); // COMPILES OK z = ((Float32)(x + y)).Tad(); } The issue is that the + operator returns a Float32_base and Tad() is not in that class. But 'x' and 'y' are Float32's. Is there a way that I can get the code in the first line to compile without having to resort to a typecast like I did on the next line?
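
    One way to make (x + y).Tad() compile without the cast, shown as a sketch and a design change rather than a drop-in fix: make the base a CRTP template so arithmetic returns the derived type.

        #include <iostream>

        template <typename Derived>
        class Float32Base {
        public:
            Float32Base(float x = 0) : value(x) {}
            operator float() const { return value; }
            // Returning Derived (not the base) is what lets callers chain Tad().
            Derived operator+(float x) const { return Derived(value + x); }
        protected:
            float value;
        };

        class Float32 : public Float32Base<Float32> {
        public:
            Float32(float x = 0) : Float32Base<Float32>(x) {}
            float Tad() const { return value + 0.01f; }
        };

        int main() {
            Float32 x(1), y(2);
            float z = (x + y).Tad();   // compiles: operator+ now yields a Float32
            std::cout << z << "\n";    // 3.01
            return 0;
        }

    The simpler alternative, if the hierarchy must stay as it is, is to re-declare operator+ in Float32 itself so that it returns Float32.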

    Read the article

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. So, first, let's copy their example data: mydata=data.frame(X1=rnorm(30), X2=rnorm(30,5,2), SNP1=c(rep("AA",10), rep("Aa",10), rep("aa",10)), SNP2=c(rep("BB",10), rep("Bb",10), rep("bb",10))) I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. So then, I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". Then if I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use: > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1=mean(df$X1), meanX2=mean(df$X2))) SNP1 meanX1 meanX2 1 aa 0.05178028 4.812302 2 Aa 0.30586206 4.820739 3 AA -0.26862500 4.856006 But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where the I have the covariance matrix for each group, and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results that have convinced me that I'm doing something wrong. > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) , , = 1 SNP1 1 2 aa 1.4961210 -0.9496134 Aa 0.8833190 -0.1640711 AA 0.9942357 -0.9955837 , , = 2 SNP1 1 2 aa -0.9496134 2.881515 Aa -0.1640711 2.466105 AA -0.9955837 4.938320 I was thinking that the dim() of the 3rd dimension would be 3, but instead, it is 2. Really this is a sliced up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get: [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 Using plyr, the following gives me what I want in list() form: > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) $aa [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 $Aa [,1] [,2] [1,] 0.8833190 -0.1640711 [2,] -0.1640711 2.4661046 $AA [,1] [,2] [1,] 0.9942357 -0.9955837 [2,] -0.9955837 4.9383196 attr(,"split_type") [1] "data.frame" attr(,"split_labels") SNP1 1 aa 2 Aa 3 AA But like I said earlier, I would really like this in a 3D array. Any thoughts on where I went wrong with daply() or suggestions? Of course, I could typecast the list from dlply() to a 3D array, but I'd rather not do this because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.
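
    A sketch of one workaround (assuming the desired shape is 2 x 2 x 3 with the groups on the third dimension): keep dlply() for the per-group matrices and fold the list into an array afterwards, rather than fighting daply()'s splitting.

        library(plyr)
        covs <- dlply(mydata, "SNP1", function(df) cov(cbind(df$X1, df$X2)))
        cov_array <- array(unlist(covs),
                           dim = c(2, 2, length(covs)),
                           dimnames = list(NULL, NULL, names(covs)))
        cov_array[, , "aa"]          # the aa group's 2 x 2 covariance matrix
        # simplify2array(covs) is a one-call alternative in base R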

    Read the article

  • Windows API Programing....

    - by vs4vijay
    Hello there, it's me, Vijay. I'm trying to draw a crosshair (some kind of cursor) on the screen while running a game (Counter-Strike), so I did this: #include<iostream.h> #include<windows.h> #include<conio.h> #include<dos.h> #include<stdlib.h> #include<process.h> #include <time.h> int main() { HANDLE hl = OpenProcess(PROCESS_ALL_ACCESS,TRUE,pid); // Here pid is the process ID of the game HDC hDC = GetDC(NULL); // Here I pass NULL for the entire screen HBRUSH hb=CreateSolidBrush(RGB(0,255,255)); SelectObject(hDC,hb); POINT p; while(!kbhit()) { int x=1360/2,y=768/2; MoveToEx(hDC,x-20,y,&p); LineTo(hDC,x+20,y); SetPixel(hDC,x,y,RGB(255,0,0)); SetPixel(hDC,x-1,y-1,RGB(255,0,0)); SetPixel(hDC,x-1,y+1,RGB(255,0,0)); SetPixel(hDC,x+1,y+1,RGB(255,0,0)); SetPixel(hDC,x+1,y-1,RGB(255,0,0)); MoveToEx(hDC,x,y-20,&p); LineTo(hDC,x,y+20); } cin.get(); return 0; } It works fine: on the desktop I see the crosshair, but my problem is that when I run the game, the crosshair disappears. So I think I did not handle the game's process, so I passed the HANDLE to GetDC(hl). But GetDC takes only an HWND (handle to a window), so I typecast it like this: HWND hl = (HWND)OpenProcess(PROCESS_ALL_ACCESS,TRUE,pid); and passed hl to GetDC(hl), but it doesn't work. What's wrong with the code? Please tell me how to draw a simple shape on the screen over a process or game. PS: My compiler is Dev-C++ and the OS is WinXP SP3.
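
    A minimal sketch of the HWND route (the window title below is an assumption; use whatever FindWindow actually matches): GetDC() needs a window handle, not a process handle, so casting the OpenProcess() result will never work. Note too that a fullscreen Direct3D game repaints every frame, so GDI drawing tends to be overwritten immediately; a topmost transparent overlay window or an in-game overlay is the usual long-term fix.

        #include <windows.h>

        int main(void)
        {
            /* Window title is an assumption; adjust to the game's real title. */
            HWND hwnd = FindWindowA(NULL, "Counter-Strike");
            if (hwnd == NULL) return 1;

            HDC hdc = GetDC(hwnd);          /* a DC for that window, not a process handle */
            int x = 1360 / 2, y = 768 / 2;
            MoveToEx(hdc, x - 20, y, NULL);
            LineTo(hdc, x + 20, y);
            MoveToEx(hdc, x, y - 20, NULL);
            LineTo(hdc, x, y + 20);
            ReleaseDC(hwnd, hdc);
            return 0;
        }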

    Read the article

  • Cannot .Count() on IQueryable (NHibernate)

    - by Bruno Reis
    Hello, I have an irritating problem. It might be something stupid, but I couldn't figure it out. I'm using Linq to NHibernate, and I would like to count how many items there are in a repository. Here is a very simplified definition of my repository, with the code that matters: public class Repository { private ISession session; /* ... */ public virtual IQueryable<Product> GetAll() { return session.Linq<Product>(); } } All the relevant code is at the end of the question. Then, to count the items in my repository, I do something like: var total = productRepository.GetAll().Count(); The problem is that total is 0. Always. However, there are items in the repository. Furthermore, I can .Get(id) any of them. My NHibernate log shows that the following query was executed: SELECT count(*) as y0_ FROM [Product] this_ WHERE not (1=1) That "WHERE not (1=1)" clause must be the cause of this problem. What can I do to be able to .Count() the items in my repository? Thanks! EDIT: Actually the repository.GetAll() code is a little bit different... and that might change something! It is actually a generic repository for entities. Some of the entities also implement the ILogicalDeletable interface (it contains a single bool property "IsDeleted"). Just before the "return" inside the GetAll() method I check whether the entity I'm querying implements ILogicalDeletable. public interface IRepository<TEntity, TId> where TEntity : Entity<TEntity, TId> { IQueryable<TEntity> GetAll(); ... } public abstract class Repository<TEntity, TId> : IRepository<TEntity, TId> where TEntity : Entity<TEntity, TId> { public virtual IQueryable<TEntity> GetAll() { if (typeof (ILogicalDeletable).IsAssignableFrom(typeof (TEntity))) { return session.Linq<TEntity>() .Where(x => (x as ILogicalDeletable).IsDeleted == false); } else { return session.Linq<TEntity>(); } } } public interface ILogicalDeletable { bool IsDeleted {get; set;} } public class Product : Entity<Product, int>, ILogicalDeletable { ... } public interface IProductRepository : IRepository<Product, int> {} public class ProductRepository : Repository<Product, int>, IProductRepository {} Edit 2: Actually, .GetAll() is always returning an empty result set for entities that implement the ILogicalDeletable interface (i.e., it ALWAYS adds a WHERE NOT (1=1) clause). I think Linq to NHibernate does not like the typecast.
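
    A sketch of one workaround, based on an assumption about the cause: the old Linq-to-NHibernate provider cannot translate the (x as ILogicalDeletable) cast inside the expression tree, so it gives up and emits WHERE NOT (1=1). Constraining a dedicated repository base by the interface removes the cast from the query entirely. This assumes the session field is reachable from the subclass (e.g. protected).

        using System.Linq;

        public class LogicallyDeletableRepository<TEntity, TId> : Repository<TEntity, TId>
            where TEntity : Entity<TEntity, TId>, ILogicalDeletable
        {
            public override IQueryable<TEntity> GetAll()
            {
                // TEntity is statically known to expose IsDeleted, so no cast is needed
                // and the Linq provider can translate the predicate to SQL.
                return session.Linq<TEntity>().Where(x => !x.IsDeleted);
            }
        }

    ProductRepository would then derive from LogicallyDeletableRepository<Product, int> instead of Repository<Product, int>.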

    Read the article

  • C program runs in Cygwin but not Linux (Malloc)

    - by Shawn
    I have a heap allocation error that I can't spot in my code; it is picked up by valgrind/gdb on Linux but runs perfectly in a Windows Cygwin environment. I understand that Linux could be tighter with its heap allocation than Windows, but I would really like a response that discovers the issue/possible fix. I'm also aware that I shouldn't typecast malloc in C, but it's a force of habit and doesn't stop my problem from happening. My program actually compiles without error on both Linux & Windows, but when I run it in Linux I get a scary looking result: malloc.c:3074: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed. Aborted Attached is the snippet from my code that is being pointed to as the error, for review: /* Main */ int main(int argc, char * argv[]) { FILE *pFile; unsigned char *buffer; long int lSize; pFile = fopen ( argv[1] , "r" ); if (pFile==NULL) {fputs ("File error on arg[1]",stderr); return 1;} fseek (pFile , 0 , SEEK_END); lSize = ftell (pFile); rewind (pFile); buffer = (char*) malloc(sizeof(char) * lSize+1); if (buffer == NULL) {fputs ("Memory error",stderr); return 2;} bitpair * ppairs = (bitpair *) malloc(sizeof(bitpair) * (lSize+1)); //line 51 below calcPairs(ppairs, (lSize+1)); /* irrelevant stuff */ fclose(pFile); free(buffer); free(ppairs); } typedef struct { long unsigned int a; //not actual variable names... Yes I need them to be long unsigned long unsigned int b; long unsigned int c; long unsigned int d; long unsigned int e; } bitpair; void calcPairs(bitpair * ppairs, long int bits); void calcPairs(bitpair * ppairs, long int bits) { long int i, top, bot, var_1, var_2; int count = 0; for(i = 0; i < cs; i++) { top = 0; ppairs[top].e = 1; do { bot = count; count++; } while(ppairs[bot].e != 0); ppairs[bot].e = 1; var_1 = bot; var_2 = top; bitpair * bp = &ppairs[var_2]; bp->a = var_2; bp->b = var_1; bp->c = i; bp = &ppairs[var_1]; bp->a = var_2; bp->b = var_1; bp->c = i; } return; } gdb reports: free(): invalid pointer: 0x0000000000603290 * valgrind reports the following message 5 times before exiting due to "VALGRIND INTERNAL ERROR" signal 11 (SIGSEGV): Invalid read of size 8 ==2727== at 0x401043: calcPairs (in /home/user/Documents/5-3/ubuntu test/main) ==2727== by 0x400C9A: main (main.c:51) ==2727== Address 0x5a607a0 is not stack'd, malloc'd or (recently) free'd
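
    A small sketch of one likely contributor, offered as an assumption from the loop shown rather than a verified diagnosis: malloc() does not zero the block, so ppairs[bot].e starts as garbage and the do/while scan for a zero .e can walk past the end of the array, which corrupts the heap and matches the "Invalid read" in calcPairs.

        /* Zeroed allocation so every ppairs[i].e starts at 0 (malloc leaves it garbage,
           which lets the do/while scan in calcPairs run past the end of the array). */
        bitpair *ppairs = calloc((size_t)(lSize + 1), sizeof *ppairs);
        if (ppairs == NULL) { fputs("Memory error", stderr); return 2; }

        /* Inside calcPairs, the scan should also be bounded by the element count (bits)
           rather than relying on finding a zero .e before the end of the block. */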

    Read the article

  • Reading serialised object from file

    - by nico
    Hi everyone. I'm writing a little Java program (it's an ImageJ plugin, but the problem is not specifically ImageJ related) and I have a problem, most probably due to the fact that I never really programmed in Java before... So, I have a Vector of Vectors and I'm trying to save it to a file and read it. The variable is defined as: Vector <Vector <myROI> > ROIs = new Vector <Vector <myROI> >(); where myROI is a class that I previously defined. Now, to write the vector to a file I use: void saveROIs() { SaveDialog save = new SaveDialog("Save ROIs...", imp.getTitle(), ".xroi"); String name = save.getFileName(); if (name == null) return; String dir = save.getDirectory(); try { FileOutputStream fos = new FileOutputStream(dir+name); ObjectOutputStream oos = new ObjectOutputStream(fos); oos.writeObject(ROIs); oos.close(); } catch (Exception e) { IJ.log(e.toString()); } } This correctly generates a binary file containing (I suppose) the object ROIs. Now, I use very similar code to read the file: void loadROIs() { OpenDialog open = new OpenDialog("Load ROIs...", imp.getTitle(), ".xroi"); String name = open.getFileName(); if (name == null) return; String dir = open.getDirectory(); try { FileInputStream fin = new FileInputStream(dir+name); ObjectInputStream ois = new ObjectInputStream(fin); ROIs = (Vector <Vector <myROI> >) ois.readObject(); // This gives error ois.close(); } catch (Exception e) { IJ.log(e.toString()); } } But this function does not work. First, I get a warning: warning: [unchecked] unchecked cast found : java.lang.Object required: java.util.Vector<java.util.Vector<myROI>> ROIs = (Vector <Vector <myROI> >) ois.readObject(); ^ I Googled that and saw that I can suppress the warning by prepending @SuppressWarnings("unchecked"), but this just makes things worse, as I get an error: <identifier> expected ROIs = (Vector <Vector <myROI> >) ois.readObject(); ^ In any case, if I omit @SuppressWarnings and ignore the warning, the object is not read and an exception is thrown: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: myROI Again Google tells me myROI needs to implement Serializable. I tried just adding implements Serializable to the class definition, but it is not sufficient. Can anyone give me some hints on how to proceed in this case? Also, how do I get rid of the typecast warning?
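
    A minimal sketch of both fixes (assuming the remaining failures come from fields of myROI that are themselves not serializable; those need to be serializable too, or marked transient): every class stored in the Vector must implement java.io.Serializable, and @SuppressWarnings can only sit on a declaration (a method or a local variable), which is why placing it on a bare assignment statement gives "<identifier> expected".

        import java.io.Serializable;

        public class myROI implements Serializable {
            private static final long serialVersionUID = 1L;
            // existing fields: each must be Serializable as well, or declared transient
        }

        // inside loadROIs(): annotate a local variable declaration, then assign the field
        @SuppressWarnings("unchecked")
        Vector<Vector<myROI>> loaded = (Vector<Vector<myROI>>) ois.readObject();
        ROIs = loaded;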

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases the problem I've run into with this error the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds. An Example of Failure To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly process it and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: Create a new AppDomain Load a Factory Class into the AppDomain Use the Factory Class to load additional types from the remote domain Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:public class TypeParserFactory : System.MarshalByRefObject,IDisposable { …/// <summary> /// TypeParser Factory method that loads the TypeParser /// object into a new AppDomain so it can be unloaded. /// Creates AppDomain and creates type. /// </summary> /// <returns></returns> public TypeParser CreateTypeParser() { if (!CreateAppDomain(null)) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! TypeParser parser = null; try { Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); } catch (Exception ex) { this.ErrorMessage = ex.GetBaseException().Message; return null; } return parser; } private bool CreateAppDomain(string lcAppDomain) { if (lcAppDomain == null) lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"); this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup); // Need a custom resolver so we can load assembly from non current path AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); return true; } …} Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object so all classes are MarshalByRefObject. The specific problem code is the loading up a new type which points at an assembly that visible both in the current domain and the remote domain and then instantiates a type from it. 
This is the code in question:Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain. Finding the Problem To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance: Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); // dynamic works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // casting fails parser = (TypeParser)objparser; So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.Another couple of tries reveal the problem however:// works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // c:\wwapps\wwhelp\wwReflection20.dll (Current Execution Folder) string info3 = typeof(TypeParser).Assembly.CodeBase; // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder) string info4 = dynParser.GetType().Assembly.CodeBase; // fails parser = (TypeParser)objparser; As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:public string GetVersionInfo() { return ".NET Version: " + Environment.Version.ToString() + "\r\n" + "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" + "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" + "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" + "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n"; } For the factory I got: .NET Version: 4.0.30319.239wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dllAssembly Cur Dir: c:\wwapps\wwhelpApplicationBase: C:\Programs\vfp9\App Domain: wwReflection534cfa1f For the instance type I got: .NET Version: 4.0.30319.239wwReflection Assembly: C:\\Programs\\vfp9\wwreflection20.dllAssembly Cur Dir: c:\\wwapps\\wwhelpApplicationBase: C:\\Programs\\vfp9\App Domain: wwDotNetBridge_56006605 which clearly shows the problem. You can see that both are loading from different appDomains but the each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads.The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. 
Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:*** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll Running under executable c:\programs\vfp9\vfp9.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = Ras\ricks LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified) LOG: Appbase = file:///C:/Programs/vfp9/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = vfp9.exe Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config. LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind). LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL. LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll. LOG: Assembly is loaded in default load context. WRN: The same assembly was loaded into multiple contexts of an application domain: WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: This might lead to runtime failures. WRN: It is recommended to inspect your application on whether this is intentional or not. WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue. Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified. Remember your Assembly Locations As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. 
The solution here was simple enough: Delete the extraneous assembly which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out because it seems so unlikely that types wouldn't match. I know I've run into this a few times and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM.

    Read the article
