Search Results

Search found 6152 results on 247 pages for 'known'.

Page 207/247 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • Python re module becomes 20 times slower when called with more than 100 different regexes

    - by Wiil
    My problem is about parsing log files and removing variable parts on each line so that the lines can be grouped. For instance:
        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)
    I have about 120+ matching rules like those above. I found no performance issues when applying 100 different regexes in succession, but a huge slowdown appears when applying 101. The exact same behavior happens when I replace my rule set with:
        for a in range(100):
            s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s)
    It gets 20 times slower when I use range(101) instead:
        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds. ==
        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==
    Why is this happening? And is there any known workaround? (Happens on Python 2.6.6/2.7.2 on Linux/Windows.)
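
    For context: in CPython 2.6/2.7 the re module keeps an internal cache of compiled patterns (re._MAXCACHE, 100 entries) and throws the whole cache away once it overflows, so with 101 distinct patterns every pass ends up recompiling everything. The usual workaround is to compile the rules once up front; a minimal sketch, where the two rules stand in for the real ~120:
        import re

        # Pre-compiled rule list; the two entries are stand-ins for the real ~120 rules.
        RULES = [
            (re.compile(r'(?i)User [_0-9A-z]+ is '), r'User .. is '),
            (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
             r'Message rejected because : \1 (...)'),
        ]

        def normalize(line):
            # Compiled pattern objects bypass re's internal cache entirely,
            # so the number of distinct rules no longer affects performance.
            for pattern, replacement in RULES:
                line = pattern.sub(replacement, line)
            return line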

    Read the article

  • Xcode: how to know whether a header file is actually imported?

    - by Philip007
    To be specific, I am using the RestKit framework. I want to use a framework class category called RKObjectManager+RKTableController in my view controller mainTVC. Here is my #import section in mainTVC.m:
        // framework headers, which should be enough
        #import <RestKit/RestKit.h>
        #import <RestKit/UI.h>
        // my project headers, not related to the framework
        #import "MainTVC.h"
        #import "Photo.h"
        // Do this to guarantee the import does happen. But still got the error, see below
        #import <RestKit/RKObjectManager+RKTableController.h>
    However, Xcode issues an error:
        No known class method for selector 'fetchRequest:groupedBy:inContext:'
    For reference, this method is a class method declared only in the category header RKObjectManager+RKTableController.h, not in RKObjectManager.h. Also, I added -ObjC and -all_load to "Other Linker Flags" in the build settings, if that's relevant. I suspect the error is caused by the category header somehow not actually being imported. How can I verify that? Or is the error caused by something else that I am not aware of? What am I doing wrong?

    Read the article

  • How to seek to a specific time in a RTP stream?

    - by Cipi
    I am streaming a prerecorded H264 video that has the following structure:
        [I] [x] [x] [x] [I] [x] [x] [x] [I]...
    In between the IDRs (the I-s in my structure) I have 32 other frames (only 3 are shown here; all the other stuff that is not IDR, like SEI, SPS, PPS... the x-es). Now, let's assume the timing of my frames is:
        TIME:   1   2   3   4   5   6   7   8   9
        FRAME: [I] [x] [x] [x] [I] [x] [x] [x] [I]...
    Now I want to seek to time 4. If I seek to that frame and send it, the picture gets messed up, because the decoder needs an IDR to decode it properly, so I resorted to finding the appropriate IDR (in this case the one with time 1) and sending it as the frame with time 4. So now the picture is decoded properly, all is well... but...
    If my GOV is 32, and I need to send the non-IDR frame with index 31, and the time span between it and the corresponding IDR is 3 seconds, I actually get 3 seconds earlier than the time I want. So this is not precise, because I cannot seek into the middle of the GOV time span. Also, I can't set a smaller GOV, so I am looking for other ideas...
    My other idea was to send the last known IDR, and then send all the other non-IDR frames that come before the one I want, but set the RTP time of all of them to be the same as the corresponding IDR. In this case the picture gets decoded perfectly, but in the case above the 3 seconds that follow the non-IDR frame with the wanted time get fast-forwarded in the decoder/player (there is no instantaneous seek)...
    Any ideas? Or can I only seek to IDRs and not to the frames in between?

    Read the article

  • CSS: styling a submit button like an anchor tag

    - by seth.vargo
    Hi all, I have a button class that I wrote in CSS. It essentially displays block, adds some styles, etc. Whenever I add the class to a tags, it works fine - the a tag spans the entire width of its container, as display: block should make it do... However, when I add the button class to an input button, Chrome, Safari, and Firefox all add a margin-right: 3px... I've used the DOM inspector in both Chrome and Safari and nowhere should it be adding an extra 3px margin. I tried adding margin: 0 !important and/or margin-right: 0 !important to my button class in my CSS, but the browsers STILL render a 3px right margin! Is this a known issue, and is there a CSS-based solution (i.e. not jQuery/JavaScript)?
    Code follows:
        .button {
            position: relative;
            display: block;
            margin: 0;
            border: 1px solid #369;
            color: #fff;
            font-weight: bold;
            padding: 11px 20px;
            line-height: 18px;
            text-align: center;
            text-transform: uppercase;
            cursor: hand;
            cursor: pointer;
        }

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by GNU tar and GNU-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink. My question is: where is the format of the next block described?
    The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then comes the next entry. If the filename is longer than will fit in the space allocated in the header block, GNU tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description of it.
    When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
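
    For what it's worth, the 'L' entry's own size field gives the length of the long name; the name itself fills the following data block(s), NUL-padded to a multiple of 512 bytes, and the block after that is the real header, whose truncated name field is then ignored. This is described in the GNU tar manual's notes on the GNU archive format (and in GNU tar's tar.h). A rough happy-path sketch in Python, with a hypothetical file name and no checksum handling:
        BLOCK = 512

        def parse_header(block):
            name = block[0:100].rstrip(b'\0')
            size = int(block[124:136].rstrip(b'\0 ') or b'0', 8)  # octal size field
            typeflag = block[156:157]
            return name, size, typeflag

        with open('archive.tar', 'rb') as f:
            name, size, typeflag = parse_header(f.read(BLOCK))
            if typeflag == b'L':                        # GNU long-name pseudo-entry
                nblocks = (size + BLOCK - 1) // BLOCK   # data is padded to full blocks
                long_name = f.read(nblocks * BLOCK)[:size].rstrip(b'\0')
                # The very next block is the real header; its own name field only
                # holds a truncated copy and is superseded by long_name.
                real_name, real_size, real_type = parse_header(f.read(BLOCK))
                print(long_name)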

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:
      - a simple, clean and non-ambiguous API
      - data selection (obviously), filtering and aggregation
      - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
      - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way; also reflection for this (get the schema from a table)
      - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
      - table creation and deletion
      - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records of the table being enumerated first
    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings. We use DynamicPDF to merge multiple PDF documents stored in an MSSQL database. The vast majority of the time it works wonderfully, but occasionally one of these documents will fail to merge, generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are larger than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is if there is a way to resize the PDF files to the correct size so this is not a concern at all...
        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
            MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
            ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
            ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc);
            docCombinedPDF.Append(mergedoc);
        }
    Thanks....

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible.
    Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course IVs (to prevent statistical analysis of packets). However I am concerned about a few things:
      - Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key).
      - Also, since the encryption key is derived from a passphrase, this will make the key space much smaller (I know the salt helps, but I have to send the salt through the network once and anyone can get it).
    Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application.
    So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
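
    For reference, a minimal sketch of the scheme as described (128-bit key from passphrase and salt via PBKDF2, a fresh random IV per datagram, AES-CBC with PKCS7 padding), illustrated here with Python's cryptography package; the iteration count and test values are only placeholders:
        import os
        from cryptography.hazmat.backends import default_backend
        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase, salt):
            # 128-bit key from the passphrase; the salt is sent once, in the clear.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt,
                             iterations=100000, backend=default_backend())
            return kdf.derive(passphrase)

        def encrypt_datagram(key, payload):
            iv = os.urandom(16)                           # fresh IV for every packet
            padder = padding.PKCS7(128).padder()          # CBC needs full blocks
            padded = padder.update(payload) + padder.finalize()
            encryptor = Cipher(algorithms.AES(key), modes.CBC(iv),
                               backend=default_backend()).encryptor()
            return iv + encryptor.update(padded) + encryptor.finalize()

        key = derive_key(b'correct horse battery staple', os.urandom(16))
        packet = encrypt_datagram(key, b'\x00\x01\x00\x2a')  # a couple of small values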

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine and good on pretty much every machine our software was deployed on before. But now we have got 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, over which I went over and over, and they seem fine).
    Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load.
    Knowns:
      - I have no access to the source, so I can't debug through the DLL.
      - Our app is the only one that uses it (so there shouldn't be simultaneous access issues).
      - There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict.
      - The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference).
    Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • Find files on a remote server

    - by Peter Kelly
    I have a web service that resides on serverA. The web service is responsible for finding files of a certain type in a virtual directory on serverB and then returning the full URLs to the files. I have the service working if the files are located on the same machine - this is straightforward enough. My question is: what is the best way to find all files of a certain type (say *.xml) in all directories below a known virtual directory on a remote server?
    So for example, the web service is at http://ServerA/service.asmx and the virtual directory is located at http://serverB/virtualdirectory
    So in this code, obviously DirectoryInfo will not take a path to the remote server - how do I access it so I can find the files it contains? And how do I then get the full URL of a file found on that remote server?
        DirectoryInfo updateDirectory = new DirectoryInfo(path);
        FileInfo[] files = updateDirectory.GetFiles("*.xml", SearchOption.AllDirectories);
        foreach (FileInfo fileInfo in files)
        {
            // Get URL to the file
        }
    I cannot have the files and the service on the same server - an IT decision that is out of my hands. Thanks!

    Read the article

  • When to use ellipsis after menu items

    - by Svish
    In pretty much all applications that have a menu bar, some of the items have an ellipsis (...) after them, and some don't. Is there a well-known convention on when to put that ellipsis there and when not to? When do you do it? Do you do it? I have looked at various Windows applications, and this is what I have come up with:
    Ellipsis:
      - Menu items which open a form that requires user input to do something (Replace, Go to, Font)
    No ellipsis:
      - Menu items which just do something (Cut, Paste, Exit, Save)
      - Menu items which open a form that does not require user input (About, Check for Updates)
    But then there always seem to be menu items that don't follow this rule, for example the Help items (How do I, Search, Index) and Find and Replace (Quick Find, Find in Files, Find Symbol) in Visual Studio. So after thinking about it a bit more, I now think it might be this:
    Ellipsis:
      - Menu items that will definitely open a modal window.
    No ellipsis:
      - Menu items that open a non-modal window.
      - Menu items that don't open any window.
      - Menu items that most likely won't open a modal window (like Save, which does open a modal window if you haven't saved before or something like that, but otherwise doesn't).
    What do you guys think?

    Read the article

  • SFINAE + sizeof = detect if expression compiles

    - by FredOverflow
    I just found out how to check if operator<< is provided for a type.
        template<class T> T& lvalue_of_type();
        template<class T> T  rvalue_of_type();

        template<class T>
        struct is_printable
        {
            template<class U> static char test(char(*)[sizeof(
                lvalue_of_type<std::ostream>() << rvalue_of_type<U>() )]);

            template<class U> static long test(...);

            enum { value = 1 == sizeof test<T>(0) };

            typedef boost::integral_constant<bool, value> type;
        };
    Is this trick well known, or have I just won the metaprogramming Nobel prize? ;)
    EDIT: I made the code simpler to understand and easier to adapt with two global function template declarations, lvalue_of_type and rvalue_of_type.

    Read the article

  • WSAECONNREFUSED when connecting to server

    - by Robert Mason
    I'm currently working on a server. I know that the client side is working (I can connect to www.google.com on port 80, for example), but the server is not functioning correctly. The socket has socket()ed, bind()ed, and listen()ed successfully and is in an accept loop. The only problem is that accept() doesn't seem to work. netstat shows that the server connection is running fine, as it prints the PID of the server process as LISTENING on the correct port. However, accept never returns. Accept just keeps running, and running, and if I try to connect to the port on localhost, I get a 10061 WSAECONNREFUSED. I tried looping the connection attempt, and it just keeps refusing connections until I hit Ctrl+C. I put a breakpoint directly after the call to accept(), and no matter how many times I try to connect to that port, the breakpoint never fires. Why is accept not accepting connections? Has anyone else had this problem before? Known:
        [breakpoint0]
        if ((new_fd = accept(sockint, NULL, NULL)) == -1)
        {
            throw netlib::error("Accept Error"); // netlib::error : public std::exception
        }
        else
        {
            [breakpoint1]
            code...;
        }
    breakpoint0 is reached (and then continued through), no exception is thrown, and breakpoint1 is never reached. The client code is proven to work. Netstat shows that the socket is listening. If it means anything, I'm connecting to 127.0.0.1 on port 5842 (a random number). The server is configured to run on 5842, and netstat confirms that the port is correct.

    Read the article

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with an OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM is connected to the I2C bus of the micro and works as permanent memory. The PC software can read from and write to this EEPROM. Consider two numbers, B and C, each a two-byte integer. B is known to both the PC software and the micro and is a constant. C will be a number so close to B that B-C will fit in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and stored into the EEPROM of the micro for later use. Now the micro can store C in two ways:
      - The micro can store the whole two bytes representing C
      - The micro can store B-C as a one-byte signed integer, and later derive C from B and B-C
    I think that two's complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement. Should I get rid of the headache that negative numbers can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick with the two-byte solution because then I don't need to deal with negative numbers at all? Which one would you prefer, and why?
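
    For what it's worth, the one-byte option stops being ambiguous as soon as the stored format is pinned down explicitly instead of being left to whatever each side does natively. A small illustration in Python (hypothetical values), using struct's standard format, which packs 'b' as an 8-bit two's-complement byte:
        import struct

        B = 1000                  # hypothetical constant, known to PC and micro
        C = 973                   # hypothetical calibration value chosen by the PC
        delta = B - C             # guaranteed by design to fit a signed 8-bit int

        raw = struct.pack('>b', delta)         # exactly one byte written to the EEPROM
        recovered = B - struct.unpack('>b', raw)[0]
        assert recovered == C

        # The same round trip for a negative difference:
        raw2 = struct.pack('>b', 1000 - 1100)  # -100 -> 0x9c in two's complement
        assert struct.unpack('>b', raw2)[0] == -100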

    Read the article

  • SQLAlchemy dynamic mapping

    - by adancu
    Hi, I have the following problem. I have the class:
        class Word(object):
            def __init__(self):
                self.id = None
                self.columns = {}
            def __str__(self):
                return "(%s, %s)" % (str(self.id), str(self.columns))
    self.columns is a dict which will hold (columnName: columnValue) values. The names of the columns are only known at runtime and they are loaded into a wordColumns list, for example:
        wordColumns = ['english', 'korean', 'romanian']
        wordTable = Table('word', metadata,
            Column('id', Integer, primary_key = True)
        )
        for columnName in wordColumns:
            wordTable.append_column(Column(columnName, String(255), nullable = False))
    I even created explicit mapper properties to "force" the table columns to be mapped onto word.columns[columnName] instead of word.columnName. I get no error on mapping, but it doesn't seem to work:
        mapperProperties = {}
        for column in wordColumns:
            mapperProperties["columns['%s']" % column] = wordTable.columns[column]
        mapper(Word, wordTable, mapperProperties)
    When I load a Word object, SQLAlchemy creates an object which has word.columns['english'], word.columns['korean'] etc. as properties, instead of loading them into the word.columns dict. So for each column it creates a new property, and the word.columns dictionary doesn't even exist. In the same way, when I try to persist a word, SQLAlchemy expects to find the column values in properties named like word.columns['english'] (string type) instead of in the dictionary word.columns. I have to say that my experience with Python and SQLAlchemy is quite limited; maybe it isn't possible to do what I'm trying to do. Any help appreciated, thanks in advance.
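
    Since mapper properties always end up as plain instance attributes (which is exactly what the behaviour above shows), one possible direction is to stop fighting that and expose a dict-style view over prefixed attributes instead. An untested sketch (reusing wordTable and wordColumns from above, and the classic mapper's column_prefix option):
        from sqlalchemy.orm import mapper

        class Word(object):
            def __getitem__(self, name):            # word['english']
                return getattr(self, '_' + name)

            def __setitem__(self, name, value):     # word['english'] = u'...'
                setattr(self, '_' + name, value)

            @property
            def columns(self):
                # Read-only dict view built from the mapped attributes.
                return dict((name, getattr(self, '_' + name)) for name in wordColumns)

        # column_prefix makes the mapped attributes _id, _english, _korean, ...
        # so the names above stay free for the dict-style access.
        mapper(Word, wordTable, column_prefix='_')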

    Read the article

  • NHibernate's ISQLQuery returns instances that are of an unexpected type.

    - by Frederik Gheysels
    Hi all, I'm using NHibernate 2.1.2.400, and I'm having an issue with an ISQLQuery query. The reason I use an ISQLQuery here is that this query uses a table for which I have no entity mapped in NHibernate. The query looks like this:
        ISQLQuery query = session.CreateSQLQuery(
            "select p.*, price.* " +
            "from prestation p left outer join prestationprice price on p.PrestationId = price.PrestationId " +
            "where p.Id IN ( select id from prestationregistry where ...");
    'Prestationregistry' is the table that is not known to NHibernate (unmapped, hence the native SQL query). My code continues like this:
        query.AddEntity("p", typeof(Prestation));
        query.AddJoin("price", typeof(PrestationPrice));
        query.SetResultTransformer(Transformers.DistinctRootEntity);
        var result = query.List();
    So far so good. I expect to be given a list of Prestation instances as the result of this query, since I declared Prestation as the root object to be returned by the AddEntity method. I also expect the PrestationPrices for each Prestation to be eagerly loaded by this query (hence the AddJoin method). To my surprise, the List() method returns a collection of PrestationPrice instances instead of Prestation instances. How come? Am I doing something wrong? And if so, could you be so kind as to tell me what I'm doing wrong?
    Edit - additional info: When I debug and put a watch on the 'query' instance, I can see that the queryReturns member of the query contains 2 items:
      - one NativeSqlQueryRootReturn instance whose ReturnEntityName is 'Prestation'
      - one NativeSqlQueryJoinReturn
    When I do not specify the DistinctRootEntity result transformer, the query returns instances of Prestation instead of PrestationPrice; however, it then contains multiple copies of the same instance.

    Read the article

  • C++: defining maximum/minimum limits for a class

    - by Luis
    Basically what the title says... I have created a class that models time slots in a variable-granularity daily schedule (where, for example, the first time slot is 30 minutes long but the second time slot can be 40 minutes); the first available slot starts at (a value comparable to) 1. What I want to do now is somehow define the maximum and minimum allowable values that this class takes, and I have two practical questions in order to do so:
    1.- Does it make sense to define an absolute minimum and maximum in such a way for a custom class? Or rather, does it suffice that a value always compares as lower than any other possible value of the type, given the class's defined relational operators, for it to be defined as the min? (and analogously for the max)
    2.- Assuming the previous question has an answer modeled after "yes" (or "yes but ..."), how do I define such a max/min? I know that there is std::numeric_limits<>, but from what I read it is intended for "numeric types". Do I interpret that as meaning "represented as a number", or can I make a broader assumption like "represented with numbers" or "having a correspondence to integers"? After all, it would make sense to define the minimum and maximum for a date class, and maybe for a dictionary class, but numeric_limits may not be intended for those uses (I don't have much experience with it). Plus, numeric_limits has a lot of extra members and information that I don't know what to do with.
    If I don't use numeric_limits, what other well-known / widely-used mechanism does C++ offer to indicate the available range of values for a class?

    Read the article

  • Is there a definitive reference document for Ruby syntax?

    - by JSW
    I'm searching for a definitive document on Ruby syntax. I know about the definitive documents for the core API and standard library, but what about the syntax itself? For instance, such a document should cover: reserved words, string literal syntax, naming rules for variables/classes/modules, all the conditional statements and their permutations, and so forth. I know there are many books and tutorials, yes, but every one of them is essentially a tutorial, each with a different depth and focus. They will all, by necessity of brevity and narrative flow, omit certain details of the language that the author deems insignificant. For instance, did you know that you can use a case statement without an initial case value, and it will then execute the first true when clause? Any given Ruby book or tutorial may or may not cover that particular lesser-known feature of the case syntax. It's not discussed in the section on case statements in "Programming Ruby". But that is just one small example. So far the best documentation I've found is the rubyspec project, which appears to be an attempt to write a complete test suite for the language. That's not bad, but it's a bit hard to use from a practical standpoint as a developer working on my own projects. Am I just missing something, or is there really no definitive, readable document defining the whole of Ruby syntax?

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL Developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.
    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file, using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a Well Known Location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV-dumping detail or the SQL*Loader script - just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments?
    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map every single column...

    Read the article

  • Delphi fsStayOnTop oddity

    - by TallGuy
    Here is the deal: the main form is set to fsNormal. This main form is maximized full screen, with a floating toolbar. The toolbar is a normal form with its style set to fsStayOnTop. Most of the time this works as expected: the main form displays and the toolbar floats on top of it. Sometimes (this is a bugger to find a reproducible set of steps for), when alt-tabbing to and from other apps (or when clicking the Delphi app's icon on the taskbar), the following symptoms can happen...
      - When alt-tabbing away from the Delphi app, the floating topmost fsStayOnTop form stays on top of the other apps. So if I alt-tab to Firefox, the floating menu stays on top of Firefox too.
      - When alt-tabbing from another app to the Delphi app, the floating menu is not visible (as it is behind the fsNormal main form).
    Is there a known bug, or any hacks to force it to work? This also seems to happen most when multiple copies of the app are running (they have no interaction between them and should be running in their own Windows "sandbox"). It is as if Delphi gets confused about which window is meant to be on top and swaps them, or changes the floating form to a stay-on-top-of-everything mode. Or have I misunderstood fsStayOnTop? I am assuming that setting a form's style to fsStayOnTop makes it stay on top of all other forms within the current app, and not on top of all windows across other running apps. Thanks for any tips or workarounds.

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:
      1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.
      2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.
      3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it.
      4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?
    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
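
    Option 3 also maps naturally onto languages with generators. A toy illustration in Python (hypothetical mini-grammar, not production code) of a recursive descent parser pulling tokens from a lazy lexer on demand:
        import re
        from collections import namedtuple

        Token = namedtuple('Token', 'kind text')

        def lex(src):
            # Pull-style lexer: each token is produced only when the parser asks for it.
            spec = [('NUM', r'\d+'), ('PLUS', r'\+'), ('TIMES', r'\*'), ('WS', r'\s+')]
            pattern = re.compile('|'.join('(?P<%s>%s)' % pair for pair in spec))
            for m in pattern.finditer(src):
                if m.lastgroup != 'WS':
                    yield Token(m.lastgroup, m.group())
            yield Token('EOF', '')

        class Parser(object):
            def __init__(self, tokens):
                self.tokens = tokens
                self.current = next(tokens)          # ask the lexer for the first token

            def eat(self, kind):
                tok = self.current
                assert tok.kind == kind, 'expected %s, got %s' % (kind, tok.kind)
                self.current = next(self.tokens)     # and for the next one, on demand
                return tok

            def expr(self):                          # expr := term ('+' term)*
                value = self.term()
                while self.current.kind == 'PLUS':
                    self.eat('PLUS')
                    value += self.term()
                return value

            def term(self):                          # term := NUM ('*' NUM)*
                value = int(self.eat('NUM').text)
                while self.current.kind == 'TIMES':
                    self.eat('TIMES')
                    value *= int(self.eat('NUM').text)
                return value

        print(Parser(lex('1 + 2 * 3')).expr())       # prints 7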

    Read the article

  • How to read a parameter passed to a facelet from a backing bean

    - by Antonio
    Hi, I've written a facelet, and a corresponding backing bean, that implements user management (addition, deletion and so on). I want to be able to perform some custom processing when, for instance, a new user is added. There is a "create" button in the facelet, whose click event is handled by its backing bean. At the end of the event handler, I want to be able to call a method of another backing bean, which is not known in advance, because ideally the facelet can be used in several pages, with different custom processing. I thought to implement this feature by providing the facelet with a backing bean name and a method name, like this:
        <myfacelet:subaccounts backingBean="myBackingBean" createListener="createListener" />
    and at the end of the event handler calling #{myBackingBean.createListener} somehow. I'm using this method (along with some overloads) to obtain a MethodExpression:
        protected MethodExpression getMethodExpression(String beanName, String methodName,
                Class<?> expectedReturnType, Class<?>[] expectedParamTypes) {
            ExpressionFactory expressionFactory;
            MethodExpression method;
            ELContext elContext;
            String el;

            el = String.format("#{%s['%s']}", beanName, methodName);
            expressionFactory = getApplication().getExpressionFactory();
            elContext = getFacesContext().getELContext();
            method = expressionFactory.createMethodExpression(elContext, el,
                expectedReturnType, expectedParamTypes);
            return method;
        }
    and the click event handler should look like:
        public void saveSubaccountListener(ActionEvent event) {
            MethodExpression method;
            ...
            method = getMethodExpression("backingBean", "createSubaccountListener", SubuserBean.class);
            if (method != null)
                method.invoke(getFacesContext().getELContext(), new Object[] { _editedSubuser });
        }
    That works fine as long as I provide an existing bean name (myBackingBean), but if I use backingBean the invoke() doesn't work, due to the following error:
        javax.el.PropertyNotFoundException: Target Unreachable, identifier 'backingBean' resolved to null
    Is there a way I can retrieve, from the facelet backing bean, the value of a parameter that has been passed to the facelet? In my case, the value of backingBean, which should be myBackingBean? I've searched for and tried different solutions, but with no luck yet.

    Read the article

  • Is there a way to automatically load navigational properties using the .NET Entity Framework?

    - by René Wolferink
    Stepping away more and more from writing SQL for my applications, I decided to give the Entity Framework a try. However, I've run into something I believe is causing me to write more code than is strictly necessary. When I accessed some navigational properties, I discovered that all many-to-one relations (simple references) were null and all one-to-many and many-to-many relations (EntityCollections) were empty. For example: I have a User with a reference to a Group. When I have retrieved a User, by using a simple select-by-id, the Group property is null. If I want to access the Group, I have to load it manually (using User.GroupReference.Load()). So I added a GetGroup() method to User which checks whether the Group is loaded already and, if not, does so and then returns the Group. Now this results in a lot of highly similar methods for all navigational properties. And it all means the navigational properties themselves are not being used; only my custom-made Get"PropertyName"() methods are now being used. I don't want to expand my queries (LINQ to Entities) to immediately load all these properties, because it's not always known at first what is needed. And besides, it would cause a lot of queries to be made. Is there a way to configure the Entity Framework to load these objects when they happen to not be present? So that when I access User.Group and the group is not loaded yet, it is loaded automatically? Or am I stuck using my own Get"PropertyName"() methods as long as I'm trying to load objects only on demand (or "just in time")?
    Some extra info: I'm using VS2008 SP1 with .NET 3.5 SP1. The Entity Framework I'm using is the one that shipped with it.

    Read the article

  • How to get rid of exceptions thrown by the .NET Framework

    - by Hans Løken
    In a recent project I'm using a lot of data binding and XML serialization. I'm using C#/VS2008 and have downloaded symbol information for the .NET Framework to help me when debugging. The app I'm working on has a global "catch all" exception handler to present more presentable messages to users if any uncaught exceptions happen to be thrown. My problem is when I turn on breaking on thrown exceptions (the Thrown column in the Exceptions dialog) to be able to debug exceptions before they are caught by the "catch all". It seems to me that the framework throws a lot of exceptions that are not immediately caught (for example in ReflectPropertyDescriptor), so that the exception I'm actually trying to debug gets lost in the noise. Is there any way to get rid of exceptions caused by the framework but keep the ones from my own code?
    Update: after more research and actually trying to get rid of the exceptions thrown by the framework (many of which turn out to be known issues in the framework, example: http://stackoverflow.com/questions/1127431/xmlserializer-giving-filenotfoundexception-at-constructor), I finally found a solution that works for me, which is turning on "Just My Code" under Tools > Options > Debugging > General > Enable Just My Code in VS2008.

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on the use of thread management and hopefully the Task Parallel Library, because I'm not sure I've been going down the correct route. Probably best is that I give an outline of what I'm trying to do.
    Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; this operation I don't think can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things:
      - They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.
      - If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again this depends on the precedence levels of the optimization metrics).
      - I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional though.
    The whole system needs to be thread safe, obviously, and I want it to be running as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...

    Read the article
