Search Results

Search found 14605 results on 585 pages for 'variable definition'.


  • Super constructor must be a first statement in Java constructor [closed]

    - by Val
    I know the answer: "we need rules to prevent shooting yourself in the foot". OK, I make millions of programming mistakes every day. To prevent all of them, we would need one simple rule: prohibit the whole JLS and do not use Java. If we explain everything by "not shooting your foot", anything can be justified, so there is not much reason in such a reason. When I programmed in Delphi, I always wanted the compiler to warn me when I read an uninitialized variable. I discovered for myself that it is stupid to read an uncertain variable, because it leads to unpredictable results and is obviously erroneous; just by looking at the code I could see there was an error, and I wished the compiler could do that job. It is also a reliable signal of a programming error if a function does not return any value. But I never wanted it to force the super constructor to come first. Why?

    You say that constructors just initialize fields. Super fields are inherited; extra fields are introduced by the subclass. From the point of view of the goal, it does not matter in which order you initialize the variables. I have studied parallel architectures and can say that all the fields could even be assigned in parallel... What? Do you want to use the uninitialized fields? Stupid people always want to take away our freedoms and break the JLS rules God gave us! Please, policeman, take that person away! Where do I say that? I am talking only about initializing/assigning, not about using the fields. The Java compiler already defends me from the mistake of reading what is not initialized. Some cases sneak through, and this example shows how the rule does not save us from read-accessing an incompletely initialized object during construction:

    public class BadSuper {
        String field;
        public String toString() {
            return "field = " + field;
        }
        public BadSuper(String val) {
            field = val;
            // yea, super-first does not protect from accessing
            // unconstructed subclass fields. The subclass constructor
            // must be called before super()!
            System.err.println(this);
        }
    }

    public class BadPost extends BadSuper {
        Object o;
        public BadPost(Object o) {
            super("str");
            this.o = o;
        }
        public String toString() {
            // the superconstructor will blow up here, because o is not initialized!
            return super.toString() + ", obj = " + o.toString();
        }
        public static void main(String[] args) {
            new BadSuper("test 1");
            new BadPost(new Object());
        }
    }

    It shows that, actually, the subclass fields would have to be initialized before the superclass! Meanwhile, the Java requirement "saves" us from specializing a class by specializing what the super constructor argument is:

    public class MyKryo extends Kryo {
        class MyClassResolver extends DefaultClassResolver {
            public Registration register(Registration registration) {
                System.out.println(MyKryo.this.getDepth());
                return super.register(registration);
            }
        }
        MyKryo() {
            // cannot instantiate MyClassResolver in super
            super(new MyClassResolver(), new MapReferenceResolver());
        }
    }

    Try to make it compilable. It is always a pain, especially when you cannot assign the argument later. Initialization order is not important for initialization in general. I could understand that you should not use super methods before the super part is initialized. But the requirement that super() be the first statement is different: it only stops code that does useful things in a simple way. I do not see how this adds safety. Actually, safety is degraded, because we need to use ugly workarounds. Doing post-initialization outside the constructors also degrades safety (otherwise, why do we need constructors?) and defeats Java's final keyword as a safety enforcer. To conclude: reading something that is not initialized is a bug.
    Initialization order is not important from the computer-science point of view; doing initialization or computations in a different order is not a bug. Enforcing checks against read access to uninitialized data is good, but compilers fail to detect all such bugs. Making super() the first statement does not solve the problem, because it "prevents" shooting at the right things, not at your foot. It forces you to invent workarounds where, because of the complexity of the analysis, it is easier to shoot yourself in the foot. And doing post-initialization outside the constructors degrades safety (otherwise, why do we need constructors?) and degrades it further by defeating the final access modifier.

    Back when the Java forum was alive, Java bigots attacked me for these thoughts. In particular, they disliked the idea that fields can be initialized in parallel, saying that "natural development" ensures correctness. When I replied that you could use advanced engineering to create a human right away, without "developing" an ape first, and it would still be an ape, they stopped listening to me, because modern technology cannot afford that. OK, take something simpler. How do you produce a Renault? Should you construct an Automobile first? No, you start by producing a Renault and, once it is completed, you will see that it is an automobile. So the requirement to produce fields in the "natural order" is unnatural. In the case of an alarm clock or an armchair, which are still a clock and a chair, you may indeed need to develop the base (the clock or the chair) first and then add the extras. So I can show examples where super fields must be initialized first and, conversely, examples where they need to be initialized later. The order does not exist in advance, so the compiler cannot know the proper order; only the programmer/constructor knows it. The compiler should not take on more responsibility and enforce the wrong order onto the programmer. Saying that I cannot initialize some fields because I have not initialized the others is like saying "you cannot initialize the thing because it is not initialized". That is the kind of argument we are having.

    So, to conclude once more: a feature that "protects" me from doing things in a simple and right way, in order to enforce something that does not add noticeably to bug elimination, is a strongly negative thing, and it pisses me off, together with all the arguments in support of it that I have seen so far. This is "a conceptual question about software development": should there be a requirement to call super() first or not? I do not know. If you do, or have an idea, you have a place to answer. I think I have provided enough arguments against this feature; let's hear from the ones who benefit from it. Let it just be something more than the simple, abstract "write your own language" or "protection" kind of argument. Why would we need it in the language that I am going to develop?
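    For reference, here is a minimal sketch of the usual workaround for the Kryo case above. It assumes the Kryo constructor shown in the snippet and a getClassResolver() accessor on Kryo (the accessor and the owner field are assumptions for illustration): the inner class becomes a static nested class, so it can be created before super(), and the back-reference is wired up after super() returns - exactly the kind of post-initialization criticized above.

    public class MyKryo extends Kryo {

        static class MyClassResolver extends DefaultClassResolver {
            private MyKryo owner; // back-reference, set after construction

            @Override
            public Registration register(Registration registration) {
                if (owner != null) {
                    System.out.println(owner.getDepth());
                }
                return super.register(registration);
            }
        }

        MyKryo() {
            // the argument expression may not reference 'this', so the resolver
            // starts out without its back-reference
            super(new MyClassResolver(), new MapReferenceResolver());
            ((MyClassResolver) getClassResolver()).owner = this; // post-initialization
        }
    }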

    Read the article

  • Making Those PanelBoxes Behave

    - by Duncan Mills
    I had a little problem to solve earlier this week - misbehaving <af:panelBox> components... What do I mean by that? Well, here's the scenario: I have a page fragment containing a set of panelBoxes arranged vertically. As it happens, they are stamped out in a loop, but that does not really matter. What I want to be able to do is provide the user with a simple UI to close and open all of the panelBoxes in concert. This could also apply to showDetailHeader and similar items with a disclosed attribute, but in this case it's good old panelBoxes. OK, so the basic solution to this should be self-evident. I can set up a suitably scoped managed bean that the panelBoxes all refer to for their disclosed attribute state. Then the open all / close all commandButtons in the UI can simply set the state of that bean for all the panelBoxes to pick up via EL on their disclosed attribute. Sounds OK? Well, that works basically without a hitch, but it turns out that there is a slight problem, and this is where the framework is attempting to be a little too helpful. The issue is that if the user manually discloses or hides a panelBox, that will override the value that the EL is setting. So, for example: I start the page with all panelBoxes collapsed, all set by the EL state I'm storing on the session. I manually disclose panelBox no. 1. I press the Expand All button - all works as you would hope and all the panelBoxes are now disclosed, including of course panelBox 1, which I just expanded manually. Finally I press the Collapse All button and everything collapses except that first panelBox that I manually disclosed. The problem is that the component remembers this manual disclosure and that overrides the value provided by the expression. If I change the viewId (navigate away and back) then the panelBox will start to behave again, until of course I touch it again! Now, the more astute amongst you will think (as I did): Ah, sounds like the MDS personalization stuff is getting in the way and the solution should simply be to set the dontPersist attribute to disclosed | ALL. Alas, this does not fix the issue. After a little noodling on the best way to approach this I came up with a solution that works well, although if you can think of an alternative way do let me know. The principle is simple. In the disclosureListener for the panelBox I take note of the clientID of the panelBox component that has been touched by the user, along with the state. This all gets stored in a Map of Booleans in viewScope which is keyed by clientID and stores the current disclosed state in the Boolean value. The listener looks like this (it's held in a request scope backing bean for the page):

    public void handlePBDisclosureEvent(DisclosureEvent disclosureEvent) {
        String clientId = disclosureEvent.getComponent().getClientId(FacesContext.getCurrentInstance());
        boolean state = disclosureEvent.isExpanded();
        pbState.addTouchedPanelBox(clientId, state);
    }

    The pbState variable referenced here is a reference to the bean, living in viewScope, which will hold the state of the panelBoxes (recall that everything is reset when the viewId is changed, so keeping this in viewScope is just fine and cleans things up automatically).
The addTouchedPanelBox() method looks like this: public void addTouchedPanelBox(String clientId, boolean state) { //create the cache if needed this is just a Map<String,Boolean> if (_touchedPanelBoxState == null) { _touchedPanelBoxState = new HashMap<String, Boolean>(); } // Simply put / replace _touchedPanelBoxState.put(clientId, state); } So that's the first part, we now have a record of every panelBox that the user has touched. So what do we do when the Collapse All or Expand All buttons are pressed? Here we do some JavaScript magic. Basically for each clientID that we have stored away, we issue a client side disclosure event from JavaScript - just as if the user had gone back and changed it manually. So here's the Collapse All button action: public String CloseAllAction() { submitDiscloseOverride(pbState.getTouchedClientIds(true), false); _uiManager.closeAllBoxes(); return null; }  The _uiManager.closeAllBoxes() method is just manipulating the master-state that all of the panelBoxes are bound to using EL. The interesting bit though is the line:  submitDiscloseOverride(pbState.getTouchedClientIds(true), false); To break that down, the first part is a call to that viewScoped state holder to ask for a list of clientIDs that need to be "tweaked": public String getTouchedClientIds(boolean targetState) { StringBuilder sb = new StringBuilder(); if (_touchedPanelBoxState != null && _touchedPanelBoxState.size() > 0) { for (Map.Entry<String, Boolean> entry : _touchedPanelBoxState.entrySet()) { if (entry.getValue() == targetState) { if (sb.length() > 0) { sb.append(','); } sb.append(entry.getKey()); } } } return sb.toString(); } You'll notice that this method only processes those panelBoxes that will be in the wrong state and returns those as a comma separated list. This is then processed by the submitDiscloseOverride() method: private void submitDiscloseOverride(String clientIdList, boolean targetDisclosureState) { if (clientIdList != null && clientIdList.length() > 0) { FacesContext fctx = FacesContext.getCurrentInstance(); StringBuilder script = new StringBuilder(); script.append("overrideDiscloseHandler('"); script.append(clientIdList); script.append("',"); script.append(targetDisclosureState); script.append(");"); Service.getRenderKitService(fctx, ExtendedRenderKitService.class).addScript(fctx, script.toString()); } } This method constructs a JavaScript command to call a routine called overrideDiscloseHandler() in a script attached to the page (using the standard <af:resource> tag). That method parses out the list of clientIDs and sends the correct message to each one: function overrideDiscloseHandler(clientIdList, newState) { AdfLogger.LOGGER.logMessage(AdfLogger.INFO, "Disclosure Hander newState " + newState + " Called with: " + clientIdList); //Parse out the list of clientIds var clientIdArray = clientIdList.split(','); for (var i = 0; i < clientIdArray.length; i++){ var panelBox = flipPanel = AdfPage.PAGE.findComponentByAbsoluteId(clientIdArray[i]); if (panelBox.getComponentType() == "oracle.adf.RichPanelBox"){ panelBox.broadcast(new AdfDisclosureEvent(panelBox, newState)); } }  }  So there you go. You can see how, with a few tweaks the same code could be used for other components with disclosure that might suffer from the same problem, although I'd point out that the behavior I'm working around here us usually desirable. You can download the running example (11.1.2.2) from here. 
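    The _uiManager master-state holder itself is not shown above. As a rough sketch (the bean name, scope and EL wiring here are assumptions rather than the actual code), each panelBox would bind its disclosed attribute to a single boolean exposed by a bean along these lines, e.g. disclosed="#{pageFlowScope.uiManager.allBoxesDisclosed}", and the Expand All / Collapse All actions would flip that flag:

    public class UIManager implements java.io.Serializable {
        // single master state that every panelBox's disclosed attribute points at via EL
        private boolean allBoxesDisclosed = false;

        public boolean isAllBoxesDisclosed() {
            return allBoxesDisclosed;
        }

        public void setAllBoxesDisclosed(boolean allBoxesDisclosed) {
            this.allBoxesDisclosed = allBoxesDisclosed;
        }

        // called from the Expand All / Collapse All button actions
        public void openAllBoxes()  { allBoxesDisclosed = true;  }
        public void closeAllBoxes() { allBoxesDisclosed = false; }
    }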

    Read the article

  • CodePlex Daily Summary for Wednesday, November 06, 2013

    CodePlex Daily Summary for Wednesday, November 06, 2013Popular ReleasesWin_8 (??? Devel Studio 2 ??? 3): Win8 0.8 + Sample (.dvs): ???????------------------------------------------ 1. ????????? ??????????? ????????? ??????. 2. ?????????? ???? , ????????? ? ?????????? ???? ????. 3. ?????????? ?????? ??????. 4. ?????????? ????????? ???????????. 5. ????????????? ??? . English----------------------------------------------------------------------- 1. Added ability to load icons. 2. Fixed bugs related to obtaining names of shapes. 3. Fixed a memory leak. 4. Fixed some defects. 5. Optimized code. ---------------------------...NWTCompiler: NWTCompiler v2.4.0: This version fixes a bug with Pinyin and adds support for the 2013 English NWT.ConEmu - Windows console with tabs: ConEmu 131105 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe"CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Home Access Plus+: v9.7: Updated: JSON.net Fixed: Issue with the Windows 8 App Added: Windows 8.1 App Added: Win: Self Signed HAP+ Install Support Added: Win: Delete File Support Added: Timeout for the Logon Tracker Removed: Error Dialogs on the User Card Fixed: Green line showing over the booking form Note: a web.config file update is requiredWPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Now row summaries are updated whenever autofilter value sis modified.Community Forums NNTP bridge: Community Forums NNTP Bridge V55 (LiveConnect): This is a which can be used with the new LiveConnect authentication and the MSDN forums. It fixed a bug where the authentication does not work after 1 hour. A logfile will be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt" or attached it to a issue.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: A placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to older versions).Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyVeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. 
v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. Improved font size ...Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 - 1.0.0.0: Set the default view for a user for a particular entity based on security role Hide/ Show views for a user for a particular entity based on his security role Choose the preferred role for a user for view configuration when the user have more than one security role in the system. Ability to exclude/ include a user from view configuration as per business requirementsDuplica: duplica 0.2.498: this is first stable releaseDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerStock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows user to enter orders in normal mode. Allow advanced user to run database queries within the program. Improved sales statistics feature, able to calculate against a single category. Updated database script file. Now compatible with lower version of SQL Server.VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesVidCoder: 1.5.11 Beta: Added Encode Details window. Exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. Subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or it's the only option. It also disables the "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...Wsus Package Publisher: Release v1.3.1311.02: Add three new Actions in Custom Updates : Work with Files (Copy, Delete, Rename), Work with Folders (Add, Delete, Rename) and Work with Registry Keys (Add, Delete, Rename). Fix a bug, where after resigning an update, the display is not refresh. Modify the way WPP sort rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Add a Tab in the settings form to set Proxy settings when WPP needs to go on Internet. Fix a bug where 'Manage Catalogs Subsc...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? 
New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...New Projects.Net PG: Just university projectA2D: 1. Cache System 2. Event System 3. IoC 4. Sql Dispatcher System 5. Session System 6. ???Command Bus 7. ????Advantage Browser: A web browser made in Visual Basic 2010 for all to add and learn from or just use.BarCoder: BarCoder is C# Web app for creating EAN-8 and EAN-16 bar codec in vector graphic image format.CLIDE .NET: CLIDE .NET The Command Line IDE for .NET Because Code is just CodeDevcken Java Library: Devcken's Java LibraryDouble Sides Flipping Control - Windows Phone: Double Sides Flipping Control The Control features the following: Two Sides Control which flipping across its center in Both Directions based on horizontal geFigTree FHMS (Funeral Home Management System): FigTree FHMS (Funeral Home Management System) application is outfitted for daily operations of your funeral home.InChatter: InChatter is a Instant messaging communication module for a desktop application programmed by C#. And the server is a WCF program.Mod.Training: Some helpful examples about Orchardy stuffPlupload MVC4 Demo: This project shows how to implement Plupload with an MVC applicationQuality Control Management System: QCMS is a web-based Test Management systemQuan Ly Nha Thuoc: Project Qu?n Lý Ti?m Thu?c Tây Email: phuoc.nh2953@sinhvien.hoasen.edu.vnRandomchaos DX11 Engine: An open source C++ DX 11 EngineService Gateway: The service gateway enables composition and agility for web sites and services.Sortable objects: Sortable objects server and client sideSTSADM ExportCrawlLog - SP Foundation 2010: STSADM extension to see/export SharePoint Fiundation 2010 MSSearch configuration and Crawl LogsTACACS Plus Extended: tacacs+ updates to support subnet specific configurations and reduced configuration with ldap access.TKinect: Framework for Testing Kinect ApplicationsWebApi Data Wrapper: This project - a collection of wrapper for WebAPI.WPF ExpressionEditor: Control representing expression editor. Allows usage of custom expression parsers.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when your are first seeding tables in a new Oracle database, or there is a time where normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal work loads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP using 8, 16, or 32 location files would be a good idea. 
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N number of location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts in some other location file, and so on). The tools ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop them up into the right number of files roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. 
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 5

    - by nikolaosk
    ?his is the fifth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters on line. There are 4 more posts in this series of posts.Please make sure you read them first.You can find the first post here. You can find the second post here. You can find the third post here.You can find the fourth here.  In this new post we will build on the previous posts and we will demonstrate how to display the details of a poster when the user clicks on an individual poster photo/link. We will add a FormView control on a web form and will bind data from the database. FormView is a great web server control for displaying the details of a single record. 1) Launch Visual Studio and open your solution where your project lives2) Add a new web form item on the project.Make sure you include the Master Page.Name it PosterDetails.aspx 3) Open the PosterDetails.aspx page. We will add some markup in this page. Have a look at the code below <asp:Content ID="Content2" ContentPlaceHolderID="FeaturedContent" runat="server">    <asp:FormView ID="posterDetails" runat="server" ItemType="PostersOnLine.DAL.Poster" SelectMethod ="GetPosterDetails">        <ItemTemplate>            <div>                <h1><%#:Item.PosterName %></h1>            </div>            <br />            <table>                <tr>                    <td>                        <img src="<%#:Item.PosterImgpath %>" border="1" alt="<%#:Item.PosterName %>" height="300" />                    </td>                    <td style="vertical-align: top">                        <b>Description:</b><br /><%#:Item.PosterDescription %>                        <br />                        <span><b>Price:</b>&nbsp;<%#: String.Format("{0:c}", Item.PosterPrice) %></span>                        <br />                        <span><b>Poster Number:</b>&nbsp;<%#:Item.PosterID %></span>                        <br />                    </td>                </tr>            </table>        </ItemTemplate>    </asp:FormView></asp:Content> I set the ItemType property to the Poster entity class and the SelectMethod to the GetPosterDetails method.The Item binding expression is available and we can retrieve properties of the Poster object.I retrieve the name, the image,the description and the price of each poster. 4) Now we need to write the GetPosterDetails method.In the code behind of the PosterDetails.aspx page we type public IQueryable<Poster> GetPosterDetails([QueryString("PosterID")]int? posterid)        {                    PosterContext ctx = new PosterContext();            IQueryable<Poster> query = ctx.Posters;            if (posterid.HasValue && posterid > 0)            {                query = query.Where(p => p.PosterID == posterid);            }            else            {                query = null;            }            return query;        } I bind the value from the query string to the posterid parameter at run time.This is all possible due to the QueryStringAttribute class that lives inside the System.Web.ModelBinding and gets the value of the query string variable PosterID.If there is a matching poster it is fetched from the database.If not,there is no data at all coming back from the database. 5) I run my application and then click on the "Midfielders" link.Then click on the first poster that appears from the left (Kenny Dalglish) and click on it to see the details. Have a look at the picture below to see the results.   
    You can see that now I have all the details of the poster in a new page. Make sure you place breakpoints in the code so you can see what is really going on. Hope it helps!!!

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building web application for customer and there is requirement that users must be able to import data in different formats. Today we will support XLSX and ODF as import formats and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy web application again when I add new importer or change some existing one. In this posting I will show you how to build generic importers support to your web application. Importer interface All importers we use must have something in common so we can easily detect them. To keep things simple I will use interface here. public interface IMyImporter {     string[] SupportedFileExtensions { get; }     ImportResult Import(Stream fileStream, string fileExtension); } Our interface has the following members: SupportedFileExtensions – string array of file extensions that importer supports. This property helps us find out what import formats are available and which importer to use with given format. Import – method that does the actual importing work. Besides file we give in as stream we also give file extension so importer can decide how to handle the file. It is enough to get started. When building real importers I am sure you will switch over to abstract base class. Importer class Here is sample importer that imports data from Excel and Word documents. Importer class with no implementation details looks like this: public class MyOpenXmlImporter : IMyImporter {     public string[] SupportedFileExtensions     {         get { return new[] { "xlsx", "docx" }; }     }     public ImportResult Import(Stream fileStream, string extension)     {         // ...     } } Finding supported import formats in web application Now we have importers created and it’s time to add them to web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface. private static string[] GetImporterFileExtensions() {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;       var extensions = new Collection<string>();     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           foreach (var extension in instance.SupportedFileExtensions)         {             if (extensions.Contains(extension))                 continue;               extensions.Add(extension);         }     }       return extensions.ToArray(); } This code doesn’t look nice and is far from optimal but it works for us now. It is possible to improve performance of web application if we cache extensions and their corresponding types to some static dictionary. We have to fill it only once because our application is restarted when something changes in bin folder. Finding importer by extension When user uploads file we need to detect the extension of file and find the importer that supports given extension. We add another method to our page or controller that uses reflection to return us importer instance or null if extension is not supported. 
private static IMyImporter GetImporterForExtension(string extensionToFind) {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           if (instance.SupportedFileExtensions.Contains(extensionToFind))         {             return instance;         }     }       return null; } Here is example ASP.NET MVC controller action that accepts uploaded file, finds importer that can handle file and imports data. Again, this is sample code I kept minimal to better illustrate how things work. public ActionResult Import(MyImporterModel model) {     var file = Request.Files[0];     var extension = Path.GetExtension(file.FileName).ToLower();     var importer = GetImporterForExtension(extension.Substring(1));     var result = importer.Import(file.InputStream, extension);     if (result.Errors.Count > 0)     {         foreach (var error in result.Errors)             ModelState.AddModelError("file", error);           return Import();     }     return RedirectToAction("Index"); } Conclusion That’s it. Using couple of ugly methods and one simple interface we were able to add importers support to our web application. Example code here is not perfect but it works. It is possible to cache mappings between file extensions and importer types to some static variable because changing of these mappings means that something is changed in bin folder of web application and web application is restarted in this case anyway.

    Read the article

  • Make a lives display in HUD, Flash AS3 (not text!)

    - by user40404
    I've been searching the internet all day and I can't find the answer I'm looking for. In my HUD I want to use orange dots to represent lives. The user starts off with 5 lives and every time they die, I want a dot to be removed. Pretty straight forward. So far my idea is to make a movie clip that has the five dots in a line. There would be 5 frames on the timeline (because after the last life it goes to a game over screen right away). I would have a variable set up to store the number of lives and a function to keep track of lives. So every hit of an obstacle would result in livesCounter--;. Then I would set up something like this: switch(livesCounter){ case 5: livesDisplay.gotoAndPlay(1); break; case 4: livesDisplay.gotoAndPlay(2); break; case 3: livesDisplay.gotoAndPlay(3); break; case 2: livesDisplay.gotoAndPlay(4); break; case 1: livesDisplay.gotoAndPlay(5); break; } I feel like there has to be an easier way to do this where I could just have a movie clip of a single orange dot that I could replicate across an x value based on the number of lives. Maybe the dots would be stored in an array? When the user loses a life, a dot on the right end of the line is removed. So in the end the counter would look like this: * * * * * * * * * * * * * * * (last life lost results in the end game screen) EDIT: code based on suggestions by Zhafur and Arthur Wolf White package { import flash.display.MovieClip; import flash.events.*; import flash.ui.Multitouch; import flash.ui.MultitouchInputMode; import flash.display.Sprite; import flash.text.*; import flash.utils.getTimer; public class CollisionMouse extends MovieClip{ public var mySprite:Sprite = new Sprite(); Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT; public var replacement:newSprite = new newSprite; public var score:int = 0; public var obstScore:int = -50; public var targetScore:int = 200; public var startTime:uint = 0; public var gameTime:uint; public var pauseScreen:PauseScreen = new PauseScreen(); public var hitTarget:Boolean = false; public var hitObj:Boolean = false; public var currLevel:Number = 1; public var heroLives:int = 5; public var life:Sprite; public function CollisionMouse() { mySprite.graphics.beginFill(0xff0000); mySprite.graphics.drawRect(0,0,40,40); addChild(mySprite); mySprite.x = 200; mySprite.y = 200; pauseScreen.x = stage.width/2; pauseScreen.y = stage.height/2; life = new Sprite(); life.x = 210; stage.addEventListener(MouseEvent.MOUSE_MOVE,followMouse); /*mySprite.addEventListener(TouchEvent.TOUCH_END, onTouchEnd);*/ //checkLevel(); timeCheck(); trackLives(); } public function timeCheck(){ addEventListener(Event.ENTER_FRAME, showTime); } public function showTime(e:Event) { gameTime = getTimer()-startTime; rm1_mc.timeDisplay.text = clockTime(gameTime); rm1_mc.livesDisplay.text = String(heroLives); } public function clockTime(ms:int) { var seconds:int = Math.floor(ms/1000); var minutes:int = Math.floor(seconds/60); seconds -= minutes*60; var timeString:String = minutes+":"+String(seconds+100).substr(1,2); return timeString; } public function trackLives(){ for(var i:int=0; i<heroLives; i++){ life.graphics.lineStyle(1, 0xff9900); life.graphics.beginFill(0xff9900, 1); life.graphics.drawCircle(i*15, 45, 6); life.graphics.endFill(); addChild(life); } } function followMouse(e:MouseEvent){ mySprite.x=mouseX; mySprite.y=mouseY; trackCollisions(); } function trackCollisions(){ if(mySprite.hitTestObject(rm1_mc.obst1) || mySprite.hitTestObject(rm1_mc.obst2)){ hitObjects(); } else 
if(mySprite.hitTestObject(rm1_mc.target_mc)){ hitTarg(); } } function hitObjects(){ addChild(replacement); mySprite.x ^= replacement.x; replacement.x ^= mySprite.x; mySprite.x ^= replacement.x; mySprite.y ^= replacement.y; replacement.y ^= mySprite.y; mySprite.y ^= replacement.y; stage.removeEventListener(MouseEvent.MOUSE_MOVE, followMouse); removeChild(mySprite); hitObj = true; checkScore(); } function hitTarg(){ addChild(replacement); mySprite.x ^= replacement.x; replacement.x ^= mySprite.x; mySprite.x ^= replacement.x; mySprite.y ^= replacement.y; replacement.y ^= mySprite.y; mySprite.y ^= replacement.y; stage.removeEventListener(MouseEvent.MOUSE_MOVE, followMouse); removeEventListener(Event.ENTER_FRAME, showTime); removeChild(mySprite); hitTarget = true; currLevel++; checkScore(); } function checkScore(){ if(hitObj){ score += obstScore; heroLives--; removeChild(life); } else if(hitTarget){ score += targetScore; } rm1_mc.scoreDisplay.text = String(score); rm1_mc.livesDisplay.text = String(heroLives); trackLives(); } } }

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use dtrace just about every day in my job and found that I had to print an argument to a function which was an array of strings.  The array was variable length up to about 10 items.  I'm not sure if the is the right way to do it, but it seems to work and is not too painful if the array size is small.Here's an example.  Suppose in your application, you have the following function, where n is number of item in the array s.void arraytest(int n, char **s){    /* Loop thru s[0] to s[n-1] */}How do you use DTrace to print out the values of s[i] or of s[0] to s[n-1]?  DTrace does not have if-then blocks or for loops, so you can't do something like:    for i=0; i<arg0; i++        trace arg1[i]; It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine if the other probe should fire or not.  So the first probe for tracing the arraytest function is:pid$target::arraytest:entry{    self->n = arg0;}Then, if I want to print out the first few items of the array, I first check the value of n.  If it's greater than the index that I want to print out, then I can print that index.  For example, if I want to print out the 3rd element of the array, I would do something like:pid$target::arraytest:entry/self->n > 2/{    printf("%s",stringof(arg1 + 2 * sizeof(pointer)));}Actually, that doesn't quite work because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where dtrace is). Also, the sizeof(char *) is 8, but for some reason, I have to use 4 which is the sizeof(uint32_t). (I still don't know how that works.)  So, the script that prints the 3rd element of the array should look like:pid$target::arraytest:entry{    /* first, save the size of the array so that we don't get            invalid address errors when indexing arg1+n. */    self->n = arg0;}pid$target::arraytest:entry/self->n > 2/{    /* print the 3rd element (index = 2) of the second arg. */    i = 2;    size = 4;    self->a_t = copyin(arg1+size*i,size);    printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}If your array is large, then it's quite painful since you have to write one probe for every array index.  For example, here's the full script for printing the first 5 elements of the array:#!/usr/sbin/dtrace -spid$target::arraytest:entry{        /* first, save the size of the array so that we don't get           invalid address errors when indexing arg1+n. 
*/        self->n = arg0;}pid$target::arraytest:entry/self->n > 0/{        i = 0;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 1/{        i = 1;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 2/{        i = 2;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 3/{        i = 3;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 4/{        i = 4;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));} If the array is large, then your script will also have to be very long to print out all values of the array.
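    Since the D script is just repetitive text, one way to cope with larger arrays is to generate the script instead of writing one probe clause per index by hand. A rough sketch (Java; the element count is an assumption passed on the command line, and the emitted clauses simply mirror the hand-written probes above):

        import java.io.PrintStream;

        public final class DProbeGenerator {
            // Prints a pid-provider D script with one clause per array index,
            // following the same pattern as the probes shown in the article.
            public static void main(String[] args) {
                int elements = args.length > 0 ? Integer.parseInt(args[0]) : 5;
                PrintStream out = System.out;
                out.println("#!/usr/sbin/dtrace -s");
                out.println("pid$target::arraytest:entry");
                out.println("{");
                out.println("        self->n = arg0;");
                out.println("}");
                for (int i = 0; i < elements; i++) {
                    out.println("pid$target::arraytest:entry");
                    out.println("/self->n > " + i + "/");
                    out.println("{");
                    out.println("        i = " + i + ";");
                    out.println("        size = sizeof(uint32_t);");
                    out.println("        self->a_t = copyin(arg1+size*i,size);");
                    out.println("        printf(\"%s: a[%d]=%s\",probefunc,i,copyinstr(*(uint32_t *)self->a_t));");
                    out.println("}");
                }
            }
        }

    Redirecting the output to a .d file and running it with dtrace -s gives the same result as the long hand-written script, for whatever element count you need.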

    Read the article

  • Can the Flashback Log replace the Redo Log?

    - by Liu Maclean
    Could the flashback log take over the job of the redo log? RVWR (Recovery Writer) wakes up roughly every 3 seconds and writes the block before images accumulated in the flashback generation buffer to the flashback database logs. Not every block change makes RVWR write a before image into the flashback log, though: if Oracle kept a before image for every single change, the flashback database logs would grow enormously. When a block is about to be changed and a before image is needed, the image is placed in the flashback log buffer (allocated in the shared pool), and RVWR later flushes that buffer to disk. Before DBWR writes a dirty buffer out, it checks the FBA (Flashback Byte Address) recorded in the buffer header against the flashback log buffer to make sure the corresponding before image has already been written. At regular intervals RVWR also writes flashback markers into the flashback database logs. The markers record the information Oracle needs to carry out a FLASHBACK DATABASE operation: Oracle uses them to locate block images in the flashback database logs that are older than the flashback target, restores those images, and then performs forward recovery with the redo stream to roll the database forward to the exact target SCN. flashback markers for example: **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8132 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 52) **** RECORD HEADER: Type: 7 (Begin Crash Recovery Record) Size: 36 RECORD DATA (Begin Crash Recovery Record): Previous logical record fba: (lno 1 thr 1 seq 1 bno 3 bof 316) Record scn: 0x0000.00000000 [0.0] **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 7868 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 316) **** RECORD HEADER: Type: 2 (Marker) Size: 300 RECORD DATA (Marker): Previous logical record fba: (lno 0 thr 0 seq 0 bno 0 bof 0) Record scn: 0x0000.00000000 [0.0] Marker scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:35 Flag 0x0 Flashback threads: 1, Enabled redo threads 1 Recovery Start Checkpoint: scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:12 thread:1 rba:(0x80.180.10) Flashback thread Markers: Thread:1 status:0 fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) Redo Thread Checkpoint Info: Thread:1 rba:(0x80.180.10) **** Record at fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8168 RECORD DATA (Skip): End-Of-Thread reached So not every block change puts a before image into the flashback log, and a flashback log record is not interchangeable with a redo log record in any case: the flashback log stores block before images, while the redo log stores change vectors. To keep the extra I/O acceptable, Oracle limits how often before images are written: for a hot block it does not dump an image for every change. Instead, Oracle maintains flashback barriers at regular intervals (roughly every 15 minutes); the barriers, together with the flashback markers, guarantee that even if a block is changed many times within one interval, a single before image from that interval plus forward application of redo is enough to rebuild the block as of any point inside it. This is also why the flashback log cannot replace the redo log. The following test illustrates the point.
SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create table flash_maclean (t1 varchar2(200)) tablespace users; Table created. SQL> insert into flash_maclean values('MACLEAN LOVE HANNA'); 1 row created. SQL> commit; Commit complete. SQL> startup force; ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2233960 bytes Variable Size 713034136 bytes Database Buffers 218103808 bytes Redo Buffers 6123520 bytes Database mounted. Database opened. SQL> update flash_maclean set t1='HANNA LOVE MACLEAN'; 1 row updated. commit; Commit complete. SQL> alter system checkpoint; System altered. SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from flash_maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 140431 4 datafile 4 block 140431 corresponds to rdba: 0x0102248f (4/140431) SQL> ! ps -ef|grep rvwr|grep -v grep oracle 26695 1 0 15:56 ? 00:00:00 ora_rvwr_G11R23 SQL> oradebug setospid 26695 Oracle pid: 20, Unix process pid: 26695, image: [email protected] (RVWR) SQL> ORADEBUG DUMP FBTAIL 1; Statement processed. ORADEBUG DUMP FBTAIL 1 dumps the last 2000 flashback records to the RVWR trace file. SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/g11r23/G11R23/trace/G11R23_rvwr_26695.trc In the trace file we can find the before image of this block: **** Record at fba: (lno 1 thr 1 seq 1 bno 55 bof 2564) **** RECORD HEADER: Type: 1 (Block Image) Size: 28 RECORD DATA (Block Image): file#: 4 rdba: 0x0102248f Next scn: 0x0000.00000000 [0.0] Flag: 0x0 Block Size: 8192 BLOCK IMAGE: buffer rdba: 0x0102248f scn: 0x0000.00609044 seq: 0x01 flg: 0x06 tail: 0x90440601 frmt: 0x02 chkval: 0xc626 type: 0x06=trans data Hex dump of block: st=0, typ_found=1 Dump of memory from 0x00002B1D94183C00 to 0x00002B1D94185C00 2B1D94183C00 0000A206 0102248F 00609044 06010000 [.....$..D.`.....] 2B1D94183C10 0000C626 00000001 00014AD4 0060903A [&........J..:.`.] 2B1D94183C20 00000000 00320002 01022488 00090006 [......2..$......] 2B1D94183C30 00000CC8 00C00340 000D0542 00008000 [[email protected].......] 2B1D94183C40 006040BC 000F000A 00000920 00C002E4 [.@`..... .......] 2B1D94183C50 0017048F 00002001 00609044 00000000 [..... ..D.`.....] 2B1D94183C60 00000000 00010100 0014FFFF 1F6E1F77 [............w.n.] 2B1D94183C70 00001F6E 1F770001 00000000 00000000 [n.....w.........] 2B1D94183C80 00000000 00000000 00000000 00000000 [................] Repeat 500 times 2B1D94185BD0 00000000 00000000 2C000000 4D120102 [...........,...M] 2B1D94185BE0 454C4341 4C204E41 2045564F 4E4E4148 [ACLEAN LOVE HANN] 2B1D94185BF0 01002C41 43414D07 4E41454C 90440601 [A,...MACLEAN..D.] Block header dump: 0x0102248f Object id on Block?
Y seg/obj: 0x14ad4 csc: 0x00.60903a itc: 2 flg: E typ: 1 - DATA brn: 0 bdba: 0x1022488 ver: 0x01 opc: 0 inc: 0 exflg: 0 Itl Xid Uba Flag Lck Scn/Fsc 0x01 0x0006.009.00000cc8 0x00c00340.0542.0d C--- 0 scn 0x0000.006040bc 0x02 0x000a.00f.00000920 0x00c002e4.048f.17 --U- 1 fsc 0x0000.00609044 bdba: 0x0102248f data_block_dump,data header at 0x2b1d94183c64 =============== tsiz: 0x1f98 hsiz: 0x14 pbl: 0x2b1d94183c64 76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f77 avsp=0x1f6e tosp=0x1f6e 0xe:pti[0] nrow=1 offs=0 0x12:pri[0] offs=0x1f77 block_row_dump: tab 0, row 0, @0x1f77 tl: 22 fb: --H-FL-- lb: 0x2 cc: 1 col 0: [18] 4d 41 43 4c 45 41 4e 20 4c 4f 56 45 20 48 41 4e 4e 41 end_of_block_dump SQL> select dump('MACLEAN LOVE HANNA',16) from dual; DUMP('MACLEANLOVEHANNA',16) -------------------------------------------------------------------- Typ=96 Len=18: 4d,41,43,4c,45,41,4e,20,4c,4f,56,45,20,48,41,4e,4e,41 The before image in the flashback log still contains the pre-update value 'MACLEAN LOVE HANNA'. Next, let's verify whether every block change really puts a before image into the flashback log: create table flash_maclean1 (t1 int) tablespace users; SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 0 redo size 0 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 49 flashback log write bytes 9306112 SQL> begin 2 for i in 1..5000 loop 3 update flash_maclean1 set t1=t1+1; 4 commit; 5 end loop; 6 end; 7 / PL/SQL procedure successfully completed. SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 20006 redo size 3071288 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 52 flashback log write bytes 10338304 For this hot block, 20006 block changes generated roughly 3000K of redo log but only about 1000K of flashback log.
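    The same before/after comparison can also be scripted. A minimal JDBC sketch, assuming a local instance named G11R23, a test user with access to v$sysstat, and the flash_maclean1 table created above; note that v$sysstat values are instance-wide, so the deltas are only meaningful on an otherwise idle instance:

        import java.sql.*;

        public final class FlashbackVsRedo {
            // Reads the current value of a v$sysstat statistic.
            static long sysstat(Connection c, String stat) throws SQLException {
                try (PreparedStatement ps = c.prepareStatement(
                        "select value from v$sysstat where name = ?")) {
                    ps.setString(1, stat);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        return rs.getLong(1);
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                        "jdbc:oracle:thin:@localhost:1521:G11R23", "maclean", "oracle")) {
                    c.setAutoCommit(false);
                    long redo0 = sysstat(c, "redo size");
                    long flash0 = sysstat(c, "flashback log write bytes");

                    // Hot-block workload: 5000 single-row updates, mirroring the PL/SQL loop above.
                    try (Statement s = c.createStatement()) {
                        for (int i = 0; i < 5000; i++) {
                            s.executeUpdate("update flash_maclean1 set t1 = t1 + 1");
                            c.commit();
                        }
                    }

                    long redo1 = sysstat(c, "redo size");
                    long flash1 = sysstat(c, "flashback log write bytes");
                    System.out.println("redo generated (bytes):        " + (redo1 - redo0));
                    System.out.println("flashback log written (bytes): " + (flash1 - flash0));
                }
            }
        }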

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi ! I'll start by saying that i'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other languages experience. I've looked a lot of tutorials, of course Nehe's ones and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awsome :p). So far, I have achieved a working tile generator, I define a char map[][] array to store wich tile is on : private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working but I'm no happy with it, I'm sure there is a beter way to do those things : 1) Loading Textures : I create an ugly looking array containing the tiles I want to use on that map : private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cutted this on purpose, I currently load 27 tiles) All of theses are stored in the drawable folder, each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for a later use : int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just was unable to figure it. If someone has an idea, i'm all ears. 
2) Drawing the tiles What I did was create a single square and a single texture map : /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers) : for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the begining of the map X-wise and move 2y down before drawing the next line } This works great by I really think that on a 1000*1000 or more map, it will be lagging as hell (as a reminder, this is a typical Zelda world map : http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Object and DisplayList but I couldn't find a good tutorial and nodoby seems to be OK on wich one is the best / has the better support (T1 and Nexus One are ages away). I think that's it, I've putted a lot of code but I think it helps. Thanks in advance !
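    One common alternative to binding a texture and issuing a draw call per tile is to pack the 16x16 tiles into a single texture atlas and build one vertex/texture-coordinate buffer for the whole map (or just the visible part of it), so the nested loop above collapses into a single glDrawArrays call. A rough sketch of the buffer-building side (Java; atlasCols/atlasRows describe a hypothetical atlas layout, and the char map is the same one used above):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        public final class TileMesh {
            public final FloatBuffer vertices;   // 2 triangles (6 vertices) per tile, x/y/z
            public final FloatBuffer texCoords;  // matching u/v per vertex
            public final int vertexCount;

            public TileMesh(char[][] map, int atlasCols, int atlasRows) {
                int tiles = map.length * map[0].length;
                vertexCount = tiles * 6;
                vertices  = newFloatBuffer(vertexCount * 3);
                texCoords = newFloatBuffer(vertexCount * 2);
                float du = 1f / atlasCols, dv = 1f / atlasRows;
                for (int y = 0; y < map.length; y++) {
                    for (int x = 0; x < map[y].length; x++) {
                        int tile = map[y][x];
                        float u = (tile % atlasCols) * du, v = (tile / atlasCols) * dv;
                        quad(vertices, x * 2f, -y * 2f);       // same 2-unit tile spacing as above
                        quadUV(texCoords, u, v, du, dv);
                    }
                }
                vertices.position(0);
                texCoords.position(0);
            }

            private static void quad(FloatBuffer b, float x, float y) {
                float[] q = { x, y, 0,  x + 2, y, 0,  x, y + 2, 0,          // triangle 1
                              x + 2, y, 0,  x + 2, y + 2, 0,  x, y + 2, 0 }; // triangle 2
                b.put(q);
            }

            private static void quadUV(FloatBuffer b, float u, float v, float du, float dv) {
                float[] q = { u, v + dv,  u + du, v + dv,  u, v,
                              u + du, v + dv,  u + du, v,  u, v };
                b.put(q);
            }

            private static FloatBuffer newFloatBuffer(int floats) {
                return ByteBuffer.allocateDirect(floats * 4)
                        .order(ByteOrder.nativeOrder()).asFloatBuffer();
            }
        }

    The renderer would then bind the atlas texture once per frame, point glVertexPointer and glTexCoordPointer at these buffers, and draw everything with one glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount). Culling to the visible window keeps this cheap even for very large maps.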

    Read the article

  • How to configure LocalSessionFactoryBean to release connections after transaction end?

    - by peter
    I am testing an application (Spring 2.5, Hibernate 3.5.0 Beta, Atomikos 3.6.2, and Postgreql 8.4.2) with the configuration for the DAO listed below. The problem that I see is that the pool of 10 connections with the dataSource gets exhausted after the 10's transaction. I know 'hibernate.connection.release_mode' has no effect unless the session is obtained with openSession rather then using a contextual session. I am wandering if anyone has found a way to configure the LocalSessionFactoryBean to release connections after any transaction. Thank you Peter <bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close"> <property name="uniqueResourceName"><value>XADBMS</value></property> <property name="xaDataSourceClassName"> <value>org.postgresql.xa.PGXADataSource</value> </property> <property name="xaProperties"> <props> <prop key="databaseName">${jdbc.name}</prop> <prop key="serverName">${jdbc.server}</prop> <prop key="portNumber">${jdbc.port}</prop> <prop key="user">${jdbc.username}</prop> <prop key="password">${jdbc.password}</prop> </props> </property> <property name="poolSize"><value>10</value></property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="mappingResources"> <list> <value>Abc.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">on</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.connection.isolation">3</prop> <prop key="hibernate.current_session_context_class">jta</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.release_mode">auto</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> </props> </property> </bean> <!-- Transaction definition here --> <bean id="userTransactionService" class="com.atomikos.icatch.config.UserTransactionServiceImp" init-method="init" destroy-method="shutdownForce"> <constructor-arg> <props> <prop key="com.atomikos.icatch.service"> com.atomikos.icatch.standalone.UserTransactionServiceFactory </prop> </props> </constructor-arg> </bean> <!-- Construct Atomikos UserTransactionManager, needed to configure Spring --> <bean id="AtomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close" depends-on="userTransactionService"> <property name="forceShutdown" value="false" /> </bean> <!-- Also use Atomikos UserTransactionImp, needed to configure Spring --> <bean id="AtomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" depends-on="userTransactionService"> <property name="transactionTimeout" value="300" /> </bean> <!-- Configure the Spring framework to use JTA transactions from Atomikos --> <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" depends-on="userTransactionService"> <property name="transactionManager" ref="AtomikosTransactionManager" /> <property name="userTransaction" ref="AtomikosUserTransaction" /> </bean> <!-- the transactional advice (what 'happens'; see the <aop:advisor/> bean below) --> <tx:advice id="txAdvice" transaction-manager="txManager"> 
<tx:attributes> <!-- all methods starting with 'get' are read-only --> <tx:method name="get*" read-only="true" propagation="REQUIRED"/> <!-- other methods use the default transaction settings (see below) --> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice> <aop:config> <aop:advisor pointcut="execution(* *.*.AbcDao.*(..))" advice-ref="txAdvice"/> </aop:config> <!-- DAO objects --> <bean id="abcDao" class="test.dao.impl.HibernateAbcDao" scope="singleton"> <property name="sessionFactory" ref="sessionFactory"/> </bean>
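    For what it's worth, the Hibernate properties above can also be set programmatically, which makes it easier to experiment with the connection release mode. A hedged sketch (Java; whether after_statement actually cures the pool exhaustion depends on the rest of the setup, so treat the property values as things to try rather than a known-good configuration):

        import java.util.Properties;
        import javax.sql.DataSource;
        import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

        public final class SessionFactoryConfig {
            // Builds a LocalSessionFactoryBean roughly equivalent to the XML above,
            // with an aggressive connection release mode for JTA.
            public static LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
                LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
                factory.setDataSource(dataSource);
                factory.setMappingResources(new String[] { "Abc.hbm.xml" });

                Properties props = new Properties();
                props.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
                props.setProperty("hibernate.current_session_context_class", "jta");
                props.setProperty("hibernate.transaction.factory_class",
                        "org.hibernate.transaction.JTATransactionFactory");
                props.setProperty("hibernate.transaction.manager_lookup_class",
                        "com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup");
                // 'auto' is resolved from the transaction strategy;
                // 'after_statement' releases connections as aggressively as possible.
                props.setProperty("hibernate.connection.release_mode", "after_statement");
                props.setProperty("hibernate.transaction.auto_close_session", "true");
                factory.setHibernateProperties(props);
                return factory;
            }
        }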

    Read the article

  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web-service in .Net to be middleware between EWS2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .Net should be easier than anything else... Right?!) My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough). I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated. The types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was: ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection, and it worked, I tried to build on it and add a method to create calendar items, which I copied from here: static void CreateAppointment(ExchangeServiceBinding esb) { // Create the appointment. CalendarItemType appointment = new CalendarItemType(); ... } Right away, I'm confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"); so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method will compile. I found this blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types that were defined in the original Assembly, Microsoft.Exchange.WebServices.dll (that came with the sample code) overlapped with Types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse. I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something is clearly wrong, here. So I went back on the hunt. I found instructions on adding service references and web references (i.e. the extra steps it takes in VS2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all previous assemblies I had been trying; and I added a service reference for https://my.exchange-server.com/ews/services.wsdl Now I'm down to just 1 error and 1 warning. Warning: The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty. This is in reference to a change that was made to web.config when I added the service reference; and I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though: Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?) 
This is in reference to the function I was using to create the EWS connection, called by each of the web methods: private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword) { ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); return svc; } This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available. (Nor is ExchangeServiceBinding, that was the first thing I checked.) At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl -- but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch... But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seem to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far. Help me, Obi-Wan, you're my only hope!

    Read the article

  • Scheduling runtime-specified Activity in Workflow 4 RC

    - by johnny g
    Hi, so I have this requirement to kick off Activities provided to me at run-time. To facilitate this, I have set up a WorkflowService that receives Activities as Xaml, hydrates them, and kicks them off. Sounds simple enough ... ... this is my WorkflowService in Xaml <Activity x:Class="Workflow.Services.WorkflowService.WorkflowService" ... xmlns:local1="clr-namespace:Workflow.Activities" > <Sequence sap:VirtualizedContainerService.HintSize="277,272"> <Sequence.Variables> <Variable x:TypeArguments="local:Workflow" Name="Workflow" /> </Sequence.Variables> <sap:WorkflowViewStateService.ViewState> <scg3:Dictionary x:TypeArguments="x:String, x:Object"> <x:Boolean x:Key="IsExpanded">True</x:Boolean> </scg3:Dictionary> </sap:WorkflowViewStateService.ViewState> <p:Receive CanCreateInstance="True" DisplayName="ReceiveSubmitWorkflow" sap:VirtualizedContainerService.HintSize="255,86" OperationName="SubmitWorkflow" ServiceContractName="IWorkflowService"> <p:ReceiveParametersContent> <OutArgument x:TypeArguments="local:Workflow" x:Key="workflow">[Workflow]</OutArgument> </p:ReceiveParametersContent> </p:Receive> <local1:InvokeActivity Activity="[ActivityXamlServices.Load(New System.IO.StringReader(Workflow.Xaml))]" sap:VirtualizedContainerService.HintSize="255,22" /> </Sequence> </Activity> ... which, except for repetitive use of "Workflow" is pretty straight forward. In fact, it's just a Sequence with a Receive and [currently] a custom Activity called InvokeActivity. Get to that in a bit. Receive Activity accepts a custom type, [DataContract] public class Workflow { [DataMember] public string Xaml { get; set; } } which contains a string whose contents are to be interpreted as Xaml. You can see the VB expression that then converts this Xaml to an Activity and passes it on. Now this second bit, the custom InvokeActivity is where I have questions. First question: 1) given an arbitrary task, provided at runtime [as described above] is it possible to kick off this Activity using Activities that ship with WF4RC, out of the box? I'm fairly new, and thought I did a good job going through the API and existing documentation, but may as well ask :) Second: 2) my first attempt at implementing a custom InvokeActivity looked like this public sealed class InvokeActivity : NativeActivity { private static readonly ILog _log = LogManager.GetLogger (typeof (InvokeActivity)); public InArgument<Activity> Activity { get; set; } public InvokeActivity () { _log.DebugFormat ("Instantiated."); } protected override void Execute (NativeActivityContext context) { Activity activity = Activity.Get (context); _log.DebugFormat ("Scheduling activity [{0}]...", activity.DisplayName); // throws exception to lack of metadata! :( ActivityInstance instance = context.ScheduleActivity (activity, OnComplete, OnFault); _log.DebugFormat ( "Scheduled activity [{0}] with instance id [{1}].", activity.DisplayName, instance.Id); } protected override void CacheMetadata (NativeActivityMetadata metadata) { // how does one add InArgument<T> to metadata? 
not easily // is my first guess base.CacheMetadata (metadata); } // private methods private void OnComplete ( NativeActivityContext context, ActivityInstance instance) { _log.DebugFormat ( "Scheduled activity [{0}] with instance id [{1}] has [{2}].", instance.Activity.DisplayName, instance.Id, instance.State); } private void OnFault ( NativeActivityFaultContext context, Exception exception, ActivityInstance instance) { _log.ErrorFormat ( @"Scheduled activity [{0}] with instance id [{1}] has faulted in state [{2}] {3}", instance.Activity.DisplayName, instance.Id, instance.State, exception.ToStringFullStackTrace ()); } } Which attempts to schedule the specified Activity within the current context. Unfortunately, however, this fails. When I attempt to schedule said Activity, the runtime returns with the following exception The provided activity was not part of this workflow definition when its metadata was being processed. The problematic activity named 'DynamicActivity' was provided by the activity named 'InvokeActivity'. Right, so the "dynamic" Activity provided at runtime is not a member of InvokeActivitys metadata. Googled and came across this. Couldn't sort out how to specify an InArgument<Activity> to metadata cache, so my second question is, naturally, how does one address this issue? Is it ill advised to use context.ScheduleActivity (...) in this manner? Third and final, 3) I have settled on this [simpler] solution for the time being, public sealed class InvokeActivity : NativeActivity { private static readonly ILog _log = LogManager.GetLogger (typeof (InvokeActivity)); public InArgument<Activity> Activity { get; set; } public InvokeActivity () { _log.DebugFormat ("Instantiated."); } protected override void Execute (NativeActivityContext context) { Activity activity = Activity.Get (context); _log.DebugFormat ("Invoking activity [{0}] ...", activity.DisplayName); // synchronous execution ... a little less than ideal, this // seems heavy handed, and not entirely semantic-equivalent // to what i want. i really want to invoke this runtime // activity as if it were one of my own, not a separate // process - wrong mentality? WorkflowInvoker.Invoke (activity); _log.DebugFormat ("Invoked activity [{0}].", activity.DisplayName); } } Which simply invokes specified task synchronously within its own runtime instance thingy [use of WF4 vernacular is certainly questionable]. Eventually, I would like to tap into WF's tracking and possibly persistance facilities. So my third and final question is, in terms of what I would like to do [ie kick off arbitrary workflows inbound from client applications] is this the preferred method? Alright, thanks in advance for your time and consideration :)

    Read the article

  • [android] MediaRecorder prepare() causes segfault

    - by dwilde1
    Folks, I have a situation where my MediaRecorder instance causes a segfault. I'm working with a HTC Hero, Android 1.5+APIs. I've tried all variations, including 3gpp and H.263 and reducing the video resolution to 320x240. What am I missing? The state machine causes 4 MediaPlayer beeps and then turns on the video camera. Here's the pertinent source: UPDATE: ADDING SURFACE CREATE INFO I have rebooted the device based on previous answer to similar question. UPDATE 2: I seem to be following the MediaRecorder state machine perfectly, and if I trap out the MR code, the blank surface displays perfectly and everything else functions perfectly. I can record videos manually and play back via MediaPlayer in my code, so there should be nothing wrong with the underlying code. I've copied sample code on the surface and surfaceHolder code. I've looked at the MR instance in the Debug perspective in Eclipse and see that all (known) variables seem to be instantiated correctly. The setter calls are all now implemented in the exaxct order specced in the state diagram. // in activity class definition protected MediaPlayer mPlayer; protected MediaRecorder mRecorder; protected boolean inCapture = false; protected int phaseCapture = 0; protected int durCapturePhase = INF; protected SurfaceView surface; protected SurfaceHolder surfaceHolder; // in onCreate() // panelPreview is an empty LinearLayout surface = new SurfaceView(getApplicationContext()); surfaceHolder = surface.getHolder(); surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); panelPreview.addView(surface); // in timer handler runnable if (mRecorder == null) mRecorder = new MediaRecorder(); mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC); mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); mRecorder.setOutputFile(path + "/" + vlip); mRecorder.setVideoSize(320, 240); mRecorder.setVideoFrameRate(15); mRecorder.setPreviewDisplay(surfaceHolder.getSurface()); panelPreview.setVisibility(LinearLayout.VISIBLE); mRecorder.prepare(); mRecorder.start(); Here is a complete log trace for the process run and crash: I/ActivityManager( 80): Start proc com.ejf.convince.jenplus for activity com.ejf.convince.jenplus/.JenPLUS: pid=17738 uid=10075 gids={1006, 3003} I/jdwp (17738): received file descriptor 10 from ADB W/System.err(17738): Can't dispatch DDM chunk 46454154: no handler defined W/System.err(17738): Can't dispatch DDM chunk 4d505251: no handler defined I/WindowManager( 80): Screen status=true, current orientation=-1, SensorEnabled=false I/WindowManager( 80): needSensorRunningLp, mCurrentAppOrientation =-1 I/WindowManager( 80): Enabling listeners W/ActivityThread(17738): Application com.ejf.convince.jenplus is waiting for the debugger on port 8100... I/System.out(17738): Sending WAIT chunk I/dalvikvm(17738): Debugger is active I/AlertDialog( 80): [onCreate] auto launch SIP. I/WindowManager( 80): onOrientationChanged, rotation changed to 0 I/System.out(17738): Debugger has connected I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... 
I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): debugger has settled (1370) I/ActivityManager( 80): Displayed activity com.ejf.convince.jenplus/.JenPLUS: 5186 ms I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/AudioHardwareMSM72XX( 2696): AUDIO_START: start kernel pcm_out driver. W/AudioFlinger( 2696): write blocked for 96 msecs I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 W/AuthorDriver( 2696): Intended width(640) exceeds the max allowed width(352). Max width is used instead. W/AuthorDriver( 2696): Intended height(480) exceeds the max allowed height(288). Max height is used instead. I/AudioHardwareMSM72XX( 2696): AudioHardware pcm playback is going to standby. I/DEBUG (16094): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** I/DEBUG (16094): Build fingerprint: 'sprint/htc_heroc/heroc/heroc: 1.5/CUPCAKE/85027:user/release-keys' I/DEBUG (16094): pid: 17738, tid: 17738 com.ejf.convince.jenplus Thanks in advance! -- Don Wilde http://www.ConvinceProject.com
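    Two things stand out in the snippet above: prepare() is called from a timer handler that may run before the SurfaceView actually has a valid surface, and no video encoder is ever set even though a video source is. A hedged sketch of driving the recorder setup from the surface lifecycle instead (Java; the encoder, size and frame-rate values are assumptions, and error handling is reduced to a log call):

        import android.media.MediaRecorder;
        import android.util.Log;
        import android.view.SurfaceHolder;

        public final class RecorderStarter implements SurfaceHolder.Callback {
            private final String outputPath;
            private MediaRecorder recorder;

            public RecorderStarter(SurfaceHolder holder, String outputPath) {
                this.outputPath = outputPath;
                holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
                holder.addCallback(this);   // wait for a real surface before prepare()
            }

            public void surfaceCreated(SurfaceHolder holder) {
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);  // missing in the code above
                recorder.setVideoSize(320, 240);
                recorder.setVideoFrameRate(15);
                recorder.setOutputFile(outputPath);
                recorder.setPreviewDisplay(holder.getSurface());
                try {
                    recorder.prepare();
                    recorder.start();
                } catch (Exception e) {
                    Log.e("RecorderStarter", "prepare/start failed", e);
                    recorder.release();
                    recorder = null;
                }
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

            public void surfaceDestroyed(SurfaceHolder holder) {
                if (recorder != null) {
                    recorder.release();
                    recorder = null;
                }
            }
        }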

    Read the article

  • JAX-WS MarshalException with custom JAX-B bindings: Unable to marshal type "java.lang.String" as an element because it is missing an @XmlRootElement annotation

    - by MoneyMark
    I seem to be having an issue with Jax-WS and Jax-b playing nicely together. I need to consume a web-service, which has a predefined WSDL. When executing the generated client I am receiving the following error: javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException - with linked exception: [com.sun.istack.SAXException2: unable to marshal type "java.lang.String" as an element because it is missing an @XmlRootElement annotation] This started occurring when I used an external custom binding file to map needlessly complex types to java.lang.string. Here is an excerpt from my binding file: <?xml version="1.0" encoding="UTF-8"?> <bindings xmlns="http://java.sun.com/xml/ns/jaxb" version="2.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"> <bindings schemaLocation="http://localhost:7777/GESOR/services/RegistryUpdatePort?wsdl#types?schema1" node="/xs:schema"> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='company_name']"> <property> <baseType name="java.lang.String" /> </property> </bindings> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address1']"> <property> <baseType name="java.lang.String" /> </property> </bindings> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address2']"> <property> <baseType name="java.lang.String" /> </property> </bindings> ...more fields </bindings> </bindings> When executing wsimport against the provided WSDL, StwrdCompany is generated with the following variables declared: @XmlRootElement(name = "StwrdCompany") public class StwrdCompany { @XmlElementRef(name = "company_name", type = JAXBElement.class) protected String companyName; @XmlElementRef(name = "address1", type = JAXBElement.class) protected String address1; @XmlElementRef(name = "address2", type = JAXBElement.class) ... more fields ... getters/setters @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "value" }) public static class CompanyName { @XmlValue protected String value; @XmlAttribute protected Boolean updateToNULL; /** * Gets the value of the value property. * * @return * possible object is * {@link String } * */ public String getValue() { return value; } /** * Sets the value of the value property. * * @param value * allowed object is * {@link String } * */ public void setValue(String value) { this.value = value; } /** * Gets the value of the updateToNULL property. * * @return * possible object is * {@link Boolean } * */ public boolean isUpdateToNULL() { if (updateToNULL == null) { return false; } else { return updateToNULL; } } /** * Sets the value of the updateToNULL property. * * @param value * allowed object is * {@link Boolean } * */ public void setUpdateToNULL(Boolean value) { this.updateToNULL = value; } ... more inner classes } } Finally, here is the associated snippet from the WSDL that seems to be causing such grief. 
<xs:element name="StwrdCompany"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="1" minOccurs="0" name="company_name" nillable="true"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> <xs:element maxOccurs="1" minOccurs="0" name="address1" nillable="true"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> ... more fields in the same format <xs:element maxOccurs="1" minOccurs="0" name="p_source_timestamp" nillable="false" type="xs:string"/> </xs:sequence> <xs:attribute name="company_xid" type="xs:string"/> </xs:complexType> </xs:element> The reason for the custom binding is so I can map user input from a pojo into the StwrdCompany object more easily, whether it be direct instantiation or through the use of Dozer for bean mapping. I was unable to successfully map between the objects without the custom binding. Finally, one other thing I tried was setting a globalBinding definition: <globalBindings generateValueClass="false"></globalBindings> This caused the server to through an argument mismatch exception since the Soap Message was using xs:string xml types instead of passing the defined complex types, so I abandoned that idea. Any insight into what is causing the MarshalException or how to go about solving the issue of calling the webservice and mapping these objects more easily, is greatly appreciated. I've been searching for days and I sadly think I am stumped.
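    The generated fields are annotated @XmlElementRef(type = JAXBElement.class), so whatever ends up in them at runtime has to be a JAXBElement carrying the element name, not a bare String; a bare String has no @XmlRootElement, which is exactly what the marshaller is complaining about. One option is to drop the baseType customization and wrap plain strings yourself when mapping from the POJO. A rough sketch of that wrapping (Java; the namespace URI is an assumption, take the real targetNamespace from the schema, and prefer the generated ObjectFactory's create... methods if they exist for these elements):

        import javax.xml.bind.JAXBElement;
        import javax.xml.namespace.QName;

        public final class JaxbElementHelper {
            // Hypothetical namespace; the real one is the targetNamespace of the schema
            // that declares StwrdCompany in the WSDL types section.
            private static final String NS = "http://localhost:7777/GESOR/types";

            // Wraps a plain String as the JAXBElement that an @XmlElementRef field expects;
            // the element name carried here is what @XmlRootElement would otherwise supply.
            public static JAXBElement<String> wrap(String localName, String value) {
                return new JAXBElement<String>(new QName(NS, localName), String.class, value);
            }
        }

    The other direction, keeping the String-typed properties, requires the bindings to change the generated annotations as well, not just the property type, so that the field no longer claims to hold a JAXBElement.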

    Read the article

  • asp.net C# Webcam api error

    - by Eyla
    Greeting, I'm tring to use webcam api with asp.net and C#. I included all the library and reverince I needed for that. the original code I'm use was for windows application and I'm trying to convert it to asp.net web application. I have start capturing button when I click it, it should start capturing but it gives me an error. the error at this line: hHwnd = capCreateCaptureWindowA(iDevice.ToString(), (WS_VISIBLE | WS_CHILD), 0, 0, 640, 480, picCapture.Handle.ToInt32(), 0); and the error message is: Error 1 'System.Web.UI.WebControls.Image' does not contain a definition for 'Handle' and no extension method 'Handle' accepting a first argument of type 'System.Web.UI.WebControls.Image' could be found (are you missing a using directive or an assembly reference?) C:\Users\Ali\Documents\Visual Studio 2008\Projects\Conference\Conference\Conference1.aspx.cs 63 117 Conference Please advice!! ................................................ here is the complete code ........................................... using System; using System.Collections; using System.Drawing; using System.ComponentModel; using System.Windows.Forms; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Runtime.InteropServices; using System.Drawing.Imaging; using System.Net; using System.Net.Sockets; using System.Threading; using System.IO; namespace Conference { public partial class Conference1 : System.Web.UI.Page { #region WebCam API const short WM_CAP = 1024; const int WM_CAP_DRIVER_CONNECT = WM_CAP + 10; const int WM_CAP_DRIVER_DISCONNECT = WM_CAP + 11; const int WM_CAP_EDIT_COPY = WM_CAP + 30; const int WM_CAP_SET_PREVIEW = WM_CAP + 50; const int WM_CAP_SET_PREVIEWRATE = WM_CAP + 52; const int WM_CAP_SET_SCALE = WM_CAP + 53; const int WS_CHILD = 1073741824; const int WS_VISIBLE = 268435456; const short SWP_NOMOVE = 2; const short SWP_NOSIZE = 1; const short SWP_NOZORDER = 4; const short HWND_BOTTOM = 1; int iDevice = 0; int hHwnd; [System.Runtime.InteropServices.DllImport("user32", EntryPoint = "SendMessageA")] static extern int SendMessage(int hwnd, int wMsg, int wParam, [MarshalAs(UnmanagedType.AsAny)] object lParam); [System.Runtime.InteropServices.DllImport("user32", EntryPoint = "SetWindowPos")] static extern int SetWindowPos(int hwnd, int hWndInsertAfter, int x, int y, int cx, int cy, int wFlags); [System.Runtime.InteropServices.DllImport("user32")] static extern bool DestroyWindow(int hndw); [System.Runtime.InteropServices.DllImport("avicap32.dll")] static extern int capCreateCaptureWindowA(string lpszWindowName, int dwStyle, int x, int y, int nWidth, short nHeight, int hWndParent, int nID); [System.Runtime.InteropServices.DllImport("avicap32.dll")] static extern bool capGetDriverDescriptionA(short wDriver, string lpszName, int cbName, string lpszVer, int cbVer); private void OpenPreviewWindow() { int iHeight = 320; int iWidth = 200; // // Open Preview window in picturebox // hHwnd = capCreateCaptureWindowA(iDevice.ToString(), (WS_VISIBLE | WS_CHILD), 0, 0, 640, 480, picCapture.Handle.ToInt32(), 0); // // Connect to device // if (SendMessage(hHwnd, WM_CAP_DRIVER_CONNECT, iDevice, 0) == 1) { // // Set the preview scale // SendMessage(hHwnd, WM_CAP_SET_SCALE, 1, 0); // // Set the preview rate in milliseconds // SendMessage(hHwnd, WM_CAP_SET_PREVIEWRATE, 66, 0); // // Start previewing the 
image from the camera // SendMessage(hHwnd, WM_CAP_SET_PREVIEW, 1, 0); // // Resize window to fit in picturebox // SetWindowPos(hHwnd, HWND_BOTTOM, 0, 0, iWidth, iHeight, (SWP_NOMOVE | SWP_NOZORDER)); } else { // // Error connecting to device close window // DestroyWindow(hHwnd); } } private void ClosePreviewWindow() { // // Disconnect from device // SendMessage(hHwnd, WM_CAP_DRIVER_DISCONNECT, iDevice, 0); // // close window // DestroyWindow(hHwnd); } #endregion protected void Page_Load(object sender, EventArgs e) { } protected void btnStart_Click(object sender, EventArgs e) { int iDevice = int.Parse(device_number_textBox.Text); OpenPreviewWindow(); } } }

    Read the article

  • Sharepoint : Access denied when editing a page (because of page layout) or list item

    - by tinky05
    I'm logged in as the System Account, so it's probably not a "real access denied"! What I've done : - A custom master page - A custom page layout from a custom content type (with custom fields) If I add a custom field (aka "content field" in the tools in SPD) in my page layout, I get an access denied when I try to edit a page that comes from that page layout. So, for example, if I add in my page layout this line in a "asp:content" tag : I get an access denied. If I remove it, everyting is fine. (the field "test" is a field that comes from the content type). Any idea? UPDATE Well, I tried in a blank site and it worked fine, so there must be something wrong with my web application :( UPDATE #2 Looks like this line in the master page gives me the access denied : <SharePoint:DelegateControl runat="server" ControlId="PublishingConsole" Visible="false" PrefixHtml="&lt;tr&gt;&lt;td colspan=&quot;0&quot; id=&quot;mpdmconsole&quot; class=&quot;s2i-consolemptablerow&quot;&gt;" SuffixHtml="&lt;/td&gt;&lt;/tr&gt;"></SharePoint:DelegateControl> UPDATE #3 I Found http://odole.wordpress.com/2009/01/30/access-denied-error-message-while-editing-properties-of-any-document-in-a-moss-document-library/ Looks like a similar issue. But our Sharepoint versions are with the latest updates. I'll try to use the code that's supposed to fix the lists and post another update. ** UPDATE #4** OK... I tried the code that I found on the page above (see link) and it seems to fix the thing. I haven't tested the solution at 100% but so far, so good. Here's the code I made for a feature receiver (I used the code posted from the link above) : using System; using System.Collections.Generic; using System.Text; using Microsoft.SharePoint; using System.Xml; namespace MyWebsite.FixAccessDenied { class FixAccessDenied : SPFeatureReceiver { public override void FeatureActivated(SPFeatureReceiverProperties properties) { FixWebField(SPContext.Current.Web); } public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { //throw new Exception("The method or operation is not implemented."); } public override void FeatureInstalled(SPFeatureReceiverProperties properties) { //throw new Exception("The method or operation is not implemented."); } public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { //throw new Exception("The method or operation is not implemented."); } static void FixWebField(SPWeb currentWeb) { string RenderXMLPattenAttribute = "RenderXMLUsingPattern"; SPSite site = new SPSite(currentWeb.Url); SPWeb web = site.OpenWeb(); web.AllowUnsafeUpdates = true; web.Update(); SPField f = web.Fields.GetFieldByInternalName("PermMask"); string s = f.SchemaXml; Console.WriteLine("schemaXml before: " + s); XmlDocument xd = new XmlDocument(); xd.LoadXml(s); XmlElement xe = xd.DocumentElement; if (xe.Attributes[RenderXMLPattenAttribute] == null) { XmlAttribute attr = xd.CreateAttribute(RenderXMLPattenAttribute); attr.Value = "TRUE"; xe.Attributes.Append(attr); } string strXml = xe.OuterXml; Console.WriteLine("schemaXml after: " + strXml); f.SchemaXml = strXml; foreach (SPWeb sites in site.AllWebs) { FixField(sites.Url); } } static void FixField(string weburl) { string RenderXMLPattenAttribute = "RenderXMLUsingPattern"; SPSite site = new SPSite(weburl); SPWeb web = site.OpenWeb(); web.AllowUnsafeUpdates = true; web.Update(); System.Collections.Generic.IList<Guid> guidArrayList = new System.Collections.Generic.List<Guid>(); foreach (SPList list in web.Lists) { guidArrayList.Add(list.ID); } 
foreach (Guid guid in guidArrayList) { SPList list = web.Lists[guid]; SPField f = list.Fields.GetFieldByInternalName("PermMask"); string s = f.SchemaXml; Console.WriteLine("schemaXml before: " + s); XmlDocument xd = new XmlDocument(); xd.LoadXml(s); XmlElement xe = xd.DocumentElement; if (xe.Attributes[RenderXMLPattenAttribute] == null) { XmlAttribute attr = xd.CreateAttribute(RenderXMLPattenAttribute); attr.Value = "TRUE"; xe.Attributes.Append(attr); } string strXml = xe.OuterXml; Console.WriteLine("schemaXml after: " + strXml); f.SchemaXml = strXml; } } } } Just put that code as a Feature Receiver, and activate it at the root site, it should loop trough all the subsites and fix the lists. SUMMARY You get an ACCESS DENIED when editing a PAGE or an ITEM You still get the error even if you're logged in as the Super Admin of the f****in world (sorry, I spent 3 days on that bug) For me, it happened after an import from another site definition (a cmp file) Actually, it's supposed to be a known bug and it's supposed to be fixed since February 2009, but it looks like it's not. The code I posted above should fix the thing.

    Read the article

  • MediaRecorder prepare() causes segfault

    - by dwilde1
    Folks, I have a situation where my MediaRecorder instance causes a segfault. I'm working with a HTC Hero, Android 1.5+APIs. I've tried all variations, including 3gpp and H.263 and reducing the video resolution to 320x240. What am I missing? The state machine causes 4 MediaPlayer beeps and then turns on the video camera. Here's the pertinent source: UPDATE: ADDING SURFACE CREATE INFO I have rebooted the device based on previous answer to similar question. UPDATE 2: I seem to be following the MediaRecorder state machine perfectly, and if I trap out the MR code, the blank surface displays perfectly and everything else functions perfectly. I can record videos manually and play back via MediaPlayer in my code, so there should be nothing wrong with the underlying code. I've copied sample code on the surface and surfaceHolder code. I've looked at the MR instance in the Debug perspective in Eclipse and see that all (known) variables seem to be instantiated correctly. The setter calls are all now implemented in the exaxct order specced in the state diagram. UPDATE 3: I've tried all permission combinations: CAMERA + RECORD_AUDIO+RECORD_VIDEO, CAMERA only, RECORD_AUDIO+RECORD_VIDEO This is driving me bats! :))) // in activity class definition protected MediaPlayer mPlayer; protected MediaRecorder mRecorder; protected boolean inCapture = false; protected int phaseCapture = 0; protected int durCapturePhase = INF; protected SurfaceView surface; protected SurfaceHolder surfaceHolder; // in onCreate() // panelPreview is an empty LinearLayout surface = new SurfaceView(getApplicationContext()); surfaceHolder = surface.getHolder(); surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); panelPreview.addView(surface); // in timer handler runnable if (mRecorder == null) mRecorder = new MediaRecorder(); mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC); mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); mRecorder.setOutputFile(path + "/" + vlip); mRecorder.setVideoSize(320, 240); mRecorder.setVideoFrameRate(15); mRecorder.setPreviewDisplay(surfaceHolder.getSurface()); panelPreview.setVisibility(LinearLayout.VISIBLE); mRecorder.prepare(); mRecorder.start(); Here is a complete log trace for the process run and crash: I/ActivityManager( 80): Start proc com.ejf.convince.jenplus for activity com.ejf.convince.jenplus/.JenPLUS: pid=17738 uid=10075 gids={1006, 3003} I/jdwp (17738): received file descriptor 10 from ADB W/System.err(17738): Can't dispatch DDM chunk 46454154: no handler defined W/System.err(17738): Can't dispatch DDM chunk 4d505251: no handler defined I/WindowManager( 80): Screen status=true, current orientation=-1, SensorEnabled=false I/WindowManager( 80): needSensorRunningLp, mCurrentAppOrientation =-1 I/WindowManager( 80): Enabling listeners W/ActivityThread(17738): Application com.ejf.convince.jenplus is waiting for the debugger on port 8100... I/System.out(17738): Sending WAIT chunk I/dalvikvm(17738): Debugger is active I/AlertDialog( 80): [onCreate] auto launch SIP. I/WindowManager( 80): onOrientationChanged, rotation changed to 0 I/System.out(17738): Debugger has connected I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... 
I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): waiting for debugger to settle... I/System.out(17738): debugger has settled (1370) I/ActivityManager( 80): Displayed activity com.ejf.convince.jenplus/.JenPLUS: 5186 ms I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/AudioHardwareMSM72XX( 2696): AUDIO_START: start kernel pcm_out driver. W/AudioFlinger( 2696): write blocked for 96 msecs I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5 W/AuthorDriver( 2696): Intended width(640) exceeds the max allowed width(352). Max width is used instead. W/AuthorDriver( 2696): Intended height(480) exceeds the max allowed height(288). Max height is used instead. I/AudioHardwareMSM72XX( 2696): AudioHardware pcm playback is going to standby. I/DEBUG (16094): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** I/DEBUG (16094): Build fingerprint: 'sprint/htc_heroc/heroc/heroc: 1.5/CUPCAKE/85027:user/release-keys' I/DEBUG (16094): pid: 17738, tid: 17738 com.ejf.convince.jenplus Thanks in advance! -- Don Wilde http://www.ConvinceProject.com

    Read the article

  • NameNotFoundException when calling a EJB in Weblogic 10.3

    - by XpiritO
    First of all, I'd like to underline that I've already read other posts in StackOverflow (example) with similar questions, but unfortunately I didn't manage to solve this problem with the answers I saw on those posts. I have no intention to repost a question that has already been answered, so if that's the case, I apologize and I'd be thankful to whom points out where the solution is posted. Here is my question: I'm trying to deploy an EJB in WebLogic 10.3.2. The purpose is to use a specific WorkManager to execute work produced in the scope of this component. With this in mind, I've set up a WorkManager (named ResponseTimeReqClass-0) on my WebLogic configuration, using the web-based interface (Environment Work Managers New). Here is a screenshot: Here is my session bean definition and descriptors: OrquestratorRemote.java package orquestrator; import javax.ejb.Remote; @Remote public interface OrquestratorRemote { public void initOrquestrator(); } OrquestratorBean.java package orquestrator; import javax.ejb.Stateless; import com.siemens.ecustoms.orchestration.eCustomsOrchestrator; @Stateless(name = "OrquestratorBean", mappedName = "OrquestratorBean") public class OrquestratorBean implements OrquestratorRemote { public void initOrquestrator(){ eCustomsOrchestrator orquestrator = new eCustomsOrchestrator(); orquestrator.run(); } } META-INF\ejb-jar.xml <?xml version='1.0' encoding='UTF-8'?> <ejb-jar xmlns='http://java.sun.com/xml/ns/javaee' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' metadata-complete='true'> <enterprise-beans> <session> <ejb-name>OrquestradorEJB</ejb-name> <mapped-name>OrquestratorBean</mapped-name> <business-remote>orquestrator.OrquestratorRemote</business-remote> <ejb-class>orquestrator.OrquestratorBean</ejb-class> <session-type>Stateless</session-type> <transaction-type>Container</transaction-type> </session> </enterprise-beans> <assembly-descriptor></assembly-descriptor> </ejb-jar> META-INF\weblogic-ejb-jar.xml (I've placed work manager configuration in this file, as I've seen on a tutorial on the internet) <weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/90" xmlns:j2ee="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/90 http://www.bea.com/ns/weblogic/90/weblogic-ejb-jar.xsd"> <weblogic-enterprise-bean> <ejb-name>OrquestratorBean</ejb-name> <jndi-name>OrquestratorBean</jndi-name> <dispatch-policy>ResponseTimeReqClass-0</dispatch-policy> </weblogic-enterprise-bean> </weblogic-ejb-jar> I've compiled this into a JAR and deployed it on WebLogic, as a library shared by administrative server and all cluster nodes on my solution (it's in "Active" state). 
As I've seen in several tutorials and examples, I'm using this code on my application, in order to call the bean: InitialContext ic = null; try { Hashtable<String,String> env = new Hashtable<String,String>(); env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory"); env.put(Context.PROVIDER_URL, "t3://localhost:7001"); ic = new InitialContext(env); } catch(Exception e) { System.out.println("\n\t Didn't get InitialContext: "+e); } // try { Object obj = ic.lookup("OrquestratorBean"); OrquestratorRemote remote =(OrquestratorRemote)obj; System.out.println("\n\n\t++ Remote => "+ remote.getClass()); System.out.println("\n\n\t++ initOrquestrator()"); remote.initOrquestrator(); } catch(Exception e) { System.out.println("\n\n\t WorkManager Exception => "+ e); e.printStackTrace(); } Unfortunately, this don't work. It throws an exception on runtime, as follows: WorkManager Exception = javax.naming.NameNotFoundException: Unable to resolve 'OrquestratorBean'. Resolved '' [Root exception is javax.naming.NameNotFoundException: Unable to resolve 'OrquestratorBean'. Resolved '']; remaining name 'OrquestratorBean' After seeing this, I've even tried changing this line Object obj = ic.lookup("OrquestratorBean"); to this: Object obj = ic.lookup("OrquestratorBean#orquestrator.OrquestratorBean"); but the result was the same runtime exception. Can anyone please help me detecting what am I doing wrong here? I'm having a bad time debugging this, as I don't know how to check out what may be causing this issue... Thanks in advance for your patience and help.
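    With mappedName = "OrquestratorBean", WebLogic's default global JNDI name for the remote business interface is the mappedName plus '#' plus the fully qualified name of the business interface, not of the bean class. A hedged sketch of the lookup (Java; it also assumes the EJB jar is deployed as an application, since a shared library is typically only activated when something references it, so its beans never show up in JNDI on their own):

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import orquestrator.OrquestratorRemote;

        public final class OrquestratorClient {
            public static OrquestratorRemote lookup() throws NamingException {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");
                Context ctx = new InitialContext(env);
                try {
                    // mappedName + '#' + remote business interface, not the bean class
                    return (OrquestratorRemote) ctx.lookup(
                            "OrquestratorBean#orquestrator.OrquestratorRemote");
                } finally {
                    ctx.close();
                }
            }

            public static void main(String[] args) throws NamingException {
                lookup().initOrquestrator();
            }
        }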

    Read the article

  • SecurityException when accessing (ejb2-) session bean via local interface in JBoss 5

    - by sme
    I have the following problem with an EJB 2 session bean when deploying to JBoss 5: the session bean (called LVSKeepAliveDispatcher) requires a specific user role (called "LVSUser"), specified in ejb-jar.xml by:
        <method-permission>
          <description></description>
          <role-name>LVSUser</role-name>
          <method>
            <description></description>
            <ejb-name>LVSKeepAliveDispatcher</ejb-name>
            <method-name>*</method-name>
          </method>
        </method-permission>
    I now want to access this session bean from a service (i.e. a class implementing the org.jboss.varia.scheduler.Schedulable interface that is then registered as a service) running inside the same JBoss instance. This is my jboss-service.xml:
        <server>
          <mbean code="org.jboss.varia.scheduler.Scheduler" name="lvs:service=TranslationService">
            <attribute name="StartAtStartup">true</attribute>
            <attribute name="SchedulableClass">de.repower.lvs.server.service.translation.TranslationService</attribute>
            <attribute name="SchedulableArguments"></attribute>
            <attribute name="SchedulableArgumentTypes"></attribute>
            <attribute name="InitialStartDate">NOW</attribute>
            <attribute name="SchedulePeriod">60000</attribute>
            <attribute name="InitialRepetitions">1</attribute>
            <attribute name="TimerName">jboss:service=Timer,name=TranslationServiceTimer</attribute>
            <depends><mbean code="javax.management.timer.Timer" name="jboss:service=Timer,name=TranslationServiceTimer"/></depends>
            <depends>jboss.j2ee:service=EJB,jndiName=de/repower/lvs/i18n/sessionbeans/LVSTranslation</depends>
          </mbean>
    As the service is deployed in the same VM as the session bean, I want to call the session bean via its local interface, but I get a SecurityException when I try to create an instance. When I instead do a lookup of the remote interface, it works. This is the code inside the perform method of my service class:
        public void perform(Date now, long remainingRepetitions) {
            try {
                final UsernamePasswordHandler handler = new UsernamePasswordHandler(USERNAME, PASSWORD);
                final LoginContext lc = new LoginContext("client-login", handler);
                lc.login();
                // Trying to instantiate an LVSKeepAliveDispatcher via the remote interface
                // This part works
                LVSKeepAliveDispatcher localvHome = LVSKeepAliveDispatcherUtil.getHome().create();
                LOGGER.info("Successfully instantiated an LVSKeepAliveDispatcher " + localvHome.toString());
                // Trying to instantiate an LVSKeepAliveDispatcherLocal via the local interface
                LVSKeepAliveDispatcherLocal localvLocalHome = LVSKeepAliveDispatcherUtil.getLocalHome().create();
                // this code is unfortunately never reached
                LOGGER.info("Successfully instantiated an LVSKeepAliveDispatcherLocal " + localvLocalHome.toString());
                lc.logout();
            } catch (final Exception ex) {
                LOGGER.error("Error: ", ex);
            }
        }
    Exception:
        2009-02-17 10:38:02,266 INFO [lvsi18n] (Timer-2) Successfully instantiated an LVSKeepAliveDispatcher de/repower/lvs/server/service/alive/sessionbeans/LVSKeepAliveDispatcher:Stateless
        2009-02-17 10:38:02,297 ERROR [org.jboss.ejb.plugins.SecurityInterceptor] (Timer-2) Error in Security Interceptor
        java.lang.SecurityException: Authentication exception, principal=internalSystemUser
            at org.jboss.ejb.plugins.SecurityInterceptor.checkSecurityContext(SecurityInterceptor.java:321)
            at org.jboss.ejb.plugins.SecurityInterceptor.process(SecurityInterceptor.java:243)
            at org.jboss.ejb.plugins.SecurityInterceptor.invokeHome(SecurityInterceptor.java:205)
            at org.jboss.ejb.plugins.security.PreSecurityInterceptor.process(PreSecurityInterceptor.java:136)
            at org.jboss.ejb.plugins.security.PreSecurityInterceptor.invokeHome(PreSecurityInterceptor.java:88)
            at org.jboss.ejb.plugins.LogInterceptor.invokeHome(LogInterceptor.java:132)
            at org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor.invokeHome(ProxyFactoryFinderInterceptor.java:107)
            at org.jboss.ejb.SessionContainer.internalInvokeHome(SessionContainer.java:639)
            at org.jboss.ejb.Container.invoke(Container.java:1046)
            at org.jboss.ejb.plugins.local.BaseLocalProxyFactory.invokeHome(BaseLocalProxyFactory.java:362)
            at org.jboss.ejb.plugins.local.LocalHomeProxy.invoke(LocalHomeProxy.java:133)
            at $Proxy193.create(Unknown Source)
            at de.repower.lvs.server.service.translation.TranslationService.perform(TranslationService.java:68)
            at org.jboss.varia.scheduler.Scheduler$PojoScheduler.invoke(Scheduler.java:1267)
            at org.jboss.varia.scheduler.Scheduler$BaseListener.handleNotification(Scheduler.java:1235)
            at sun.reflect.GeneratedMethodAccessor281.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.jboss.mx.notification.NotificationListenerProxy.invoke(NotificationListenerProxy.java:153)
            at $Proxy87.handleNotification(Unknown Source)
            at javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:257)
            at javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:322)
            at javax.management.NotificationBroadcasterSupport$1.execute(NotificationBroadcasterSupport.java:307)
            at javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:229)
            at javax.management.timer.Timer.sendNotification(Timer.java:1234)
            at javax.management.timer.Timer.notifyAlarmClock(Timer.java:1203)
            at javax.management.timer.TimerAlarmClock.run(Timer.java:1286)
            at java.util.TimerThread.mainLoop(Timer.java:512)
            at java.util.TimerThread.run(Timer.java:462)
    To further diagnose the error, I debugged through the SecurityInterceptor and found that in the first case (successfully creating an instance via the remote interface) the security context "lvs-security" (which I defined in login-config.xml) is being used, whereas in the second case (failure when creating an instance via the local interface) the generic security context "CLIENT-LOGIN" is being used. This is the definition of the security context "lvs-security" in login-config.xml:
        <application-policy name = "lvs-security">
          <authentication>
            <login-module code = "org.jboss.security.ClientLoginModule" flag = "required">
            </login-module>
            <login-module code = "de.repower.lvs.security.UsersRolesLoginModule" flag = "sufficient">
            </login-module>
            <login-module code = "de.repower.lvs.security.login.LVSLoginModule" flag = "required">
              <module-option name = "lvs-jboss-host">localhost</module-option>
              <module-option name = "lvs-jboss-jndi-port">1099</module-option>
            </login-module>
          </authentication>
        </application-policy>
    I'm now kind of stuck and hope someone can give me a hint about where to look further for the cause of the problem. This worked fine in JBoss 3.2.7.
    Edit: My current workaround for this problem: create a new container configuration in jboss.xml and remove the security interceptor stuff from that configuration, then use this newly created container configuration for all the session beans that I only use locally (i.e. via the local interface).
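    For comparison, one thing worth trying is sketched below; it is not a verified fix. The idea is to authenticate the scheduled service against the "lvs-security" application policy from login-config.xml instead of the generic "client-login" configuration, since that is the policy that actually maps the LVSUser role. USERNAME, PASSWORD, LOGGER and LVSKeepAliveDispatcherUtil are the question's own identifiers, and whether the in-VM local-interface call then picks up the authenticated context is an assumption.
        import java.util.Date;
        import javax.security.auth.login.LoginContext;
        import org.jboss.security.auth.callback.UsernamePasswordHandler;

        // Hedged sketch of a modified perform(): log in against the policy that maps LVSUser
        // ("lvs-security", taken from the login-config.xml above) before touching the local home.
        public void perform(Date now, long remainingRepetitions) {
            try {
                final UsernamePasswordHandler handler = new UsernamePasswordHandler(USERNAME, PASSWORD);
                final LoginContext lc = new LoginContext("lvs-security", handler);
                lc.login();
                try {
                    LVSKeepAliveDispatcherLocal local = LVSKeepAliveDispatcherUtil.getLocalHome().create();
                    LOGGER.info("Created via local interface: " + local);
                } finally {
                    lc.logout(); // always clear the security association for this thread
                }
            } catch (final Exception ex) {
                LOGGER.error("Error: ", ex);
            }
        }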

    Read the article

  • Does RVM "failover" to another ruby instance on error?

    - by JohnMetta
    I have a strange problem in that a Rake task seems to be using multiple versions of Ruby. When one fails, it seems to try another one.
    Details:
    MacBook running 10.6.5
    rvm 1.1.0
    Rubies: 1.8.7-p302, ree-1.8.7-2010.02, ruby-1.9.2-p0
    Rake 0.8.7
    Gem 1.3.7
    Veewee (provisioning Virtual Machines using Opcode.com, Vagrant and Chef)
    I'm not entirely sure the specific details of the error matter, but I'm including them since it might be an issue with Veewee itself. What I'm trying to do is build a new box based on a veewee definition. The command fails with an error about a missing method, but what's interesting is how it fails.
    Errors
    I managed to figure out that if I only have one Ruby installed with RVM, it just fails. If I have more than one Ruby installed, it fails at the same place, but execution seems to continue in another interpreter. Here are two different clipped console outputs. I've clipped them for size. The full outputs of each error are available as a gist.
    One Ruby version installed
    Here is the command run when I only have a single version of Ruby (1.8.7) available in RVM:
        boudica:veewee john$ rvm rake build['mettabox'] --trace
        rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]
        (in /Users/john/Work/veewee)
        ** Invoke build (first_time)
        ** Execute build
        …
        creating new harddrive
        rake aborted!
        undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x102d6af80>
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing'
        <------ stacktraces cut ---------->
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31
        /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load'
        /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19
    Multiple Ruby versions
    Here is the same command run with three versions of Ruby available in RVM. Prior to doing this, I used "rvm use 1.8.7". Again, I don't know how important the details of the specific errors are; what's interesting to me is that there are three separate errors, each with its own stacktrace, and each in a different Ruby interpreter. Look at the bottom of each stacktrace and you'll see that they are all sourced from different interpreter locations: first ree-1.8.7, then ruby-1.8.7, then ruby-1.9.2:
        boudica:veewee john$ rvm rake build['mettabox'] --trace
        rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]
        (in /Users/john/Work/veewee)
        ** Invoke build (first_time)
        ** Execute build
        …
        creating new harddrive
        rake aborted!
        undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x1059dd608>
        /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing'
        …
        /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/rake-0.8.7/bin/rake:31
        /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19:in `load'
        /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19
        (in /Users/john/Work/veewee)
        ** Invoke build (first_time)
        ** Execute build
        isofile ubuntu-10.04.1-server-amd64.iso is available
        ["a1b857f92eecaf9f0a31ecfc39dee906", "30b5c6fdddbfe7b397fe506400be698d"]
        []
        Last good state: -1
        Current step: 0
        last good state -1
        destroying machine+disks
        (re-)executing step 0-initial-a1b857f92eecaf9f0a31ecfc39dee906
        VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists
        VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports
        Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp
        rake aborted!
        undefined method `memory_size=' for nil:NilClass
        /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm'
        /Users/john/Work/veewee/lib/veewee/session.rb:166:in `build'
        /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction'
        /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build'
        /Users/john/Work/veewee/Rakefile:87
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        …
        /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31
        /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load'
        /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19
        (in /Users/john/Work/veewee)
        ** Invoke build (first_time)
        ** Execute build
        isofile ubuntu-10.04.1-server-amd64.iso is available
        ["a9c4ab3257e1da3479c984eae9905c2a", "30b5c6fdddbfe7b397fe506400be698d"]
        []
        Last good state: -1
        Current step: 0
        last good state -1
        (re-)executing step 0-initial-a9c4ab3257e1da3479c984eae9905c2a
        VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists
        VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports
        Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp
        rake aborted!
        undefined method `memory_size=' for nil:NilClass
        /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm'
        /Users/john/Work/veewee/lib/veewee/session.rb:166:in `block in build'
        /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction'
        /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build'
        /Users/john/Work/veewee/Rakefile:87:in `block in <top (required)>'
        /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `call'
        /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `block in execute'
        …
        /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:2013:in `top_level'
        /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:1992:in `run'
        /Users/john/.rvm/rubies/ruby-1.9.2-p0/bin/rake:35:in `<main>'
    It isn't until we reach the last installed version of Ruby that execution halts.
    Discussion
    Does anyone have any idea what's going on here? Has anyone seen this "failover"-like behavior before? It seems strange to me that the first exception would not halt execution as it did with one interpreter, but I wonder if there are things happening when RVM is installed that we Ruby developers are not considering.

    Read the article

  • Mixing inheritance mapping strategies in NHibernate

    - by MylesRip
    I have a rather large inheritance hierarchy in which some of the subclasses add very little and others add quite a bit. I don't want to map the entire hierarchy using either "table per class hierarchy" or "table per subclass" due to the size and complexity of the hierarchy. Ideally I'd like to mix mapping strategies such that portions of the hierarchy where the subclasses add very little are combined into a common table a la "table per class hierarchy", and subclasses that add a lot are broken out into a separate table. Using this approach, I would expect to have 2 or 3 tables with very little wasted space instead of either 1 table with lots of fields that don't apply to most of the objects, or 20+ tables, several of which would have only a couple of columns. In the NHibernate Reference Documentation version 2.1.0, I found section 8.1.4 "Mixing table per class hierarchy with table per subclass". This approach switches strategies partway down the hierarchy by using:
        ...
        <subclass ...>
          <join ...>
            <property ...>
            ...
          </join>
        </subclass>
        ...
    This is great in theory. In practice, though, I found that the schema was too restrictive in what was allowed inside the "join" element for me to be able to accomplish what I needed. Here is the related part of the schema definition:
        <xs:element name="join">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="subselect" minOccurs="0" />
              <xs:element ref="comment" minOccurs="0" />
              <xs:element ref="key" />
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                <xs:element ref="property" />
                <xs:element ref="many-to-one" />
                <xs:element ref="component" />
                <xs:element ref="dynamic-component" />
                <xs:element ref="any" />
                <xs:element ref="map" />
                <xs:element ref="set" />
                <xs:element ref="list" />
                <xs:element ref="bag" />
                <xs:element ref="idbag" />
                <xs:element ref="array" />
                <xs:element ref="primitive-array" />
              </xs:choice>
              <xs:element ref="sql-insert" minOccurs="0" />
              <xs:element ref="sql-update" minOccurs="0" />
              <xs:element ref="sql-delete" minOccurs="0" />
            </xs:sequence>
            <xs:attribute name="table" use="required" type="xs:string" />
            <xs:attribute name="schema" type="xs:string" />
            <xs:attribute name="catalog" type="xs:string" />
            <xs:attribute name="subselect" type="xs:string" />
            <xs:attribute name="fetch" default="join">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="join" />
                  <xs:enumeration value="select" />
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
            <xs:attribute name="inverse" default="false" type="xs:boolean">
            </xs:attribute>
            <xs:attribute name="optional" default="false" type="xs:boolean">
            </xs:attribute>
          </xs:complexType>
        </xs:element>
    As you can see, this allows the use of "property" child elements or "component" child elements, but not both. It also doesn't allow for "subclass" child elements to continue the hierarchy below the point at which the strategy was changed. Is there a way to accomplish this?
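    For orientation, the pattern from section 8.1.4 roughly looks like the mapping sketch below: the root class and small subclasses share one table (table per class hierarchy), while a heavyweight subclass is pushed into its own table through a <join> element (table per subclass). The class, table and column names here are invented for illustration, loosely following the Payment example in the reference documentation, and are not taken from the poster's model.
        <class name="Payment" table="PAYMENT" discriminator-value="P">
          <id name="Id" column="PAYMENT_ID">
            <generator class="native" />
          </id>
          <discriminator column="PAYMENT_TYPE" type="String" />
          <property name="Amount" column="AMOUNT" />
          <!-- small subclasses stay in the root table (table per class hierarchy) -->
          <subclass name="CashPayment" discriminator-value="CASH" />
          <!-- a large subclass gets its own table via <join> (table per subclass) -->
          <subclass name="CreditCardPayment" discriminator-value="CREDIT">
            <join table="CREDIT_PAYMENT">
              <key column="PAYMENT_ID" />
              <property name="CreditCardType" column="CC_TYPE" />
              <property name="CardNumber" column="CC_NUMBER" />
            </join>
          </subclass>
        </class>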

    Read the article

  • Webcam API error when accessed from ASP.NET server-side code

    - by Eyla
    I'm trying to use the webcam API with ASP.NET and C#. I included all the libraries and references I needed for that. The original code I'm using was written for a Windows application, and I'm trying to convert it to an ASP.NET web application. I have a "start capturing" button; when I click it, it should start capturing, but instead it gives me an error. The error is at this line:
        hHwnd = capCreateCaptureWindowA(iDevice.ToString(), (WS_VISIBLE | WS_CHILD), 0, 0, 640, 480, picCapture.Handle.ToInt32(), 0);
    and the error message is:
        Error 1 'System.Web.UI.WebControls.Image' does not contain a definition for 'Handle' and no extension method 'Handle' accepting a first argument of type 'System.Web.UI.WebControls.Image' could be found (are you missing a using directive or an assembly reference?) C:\Users\Ali\Documents\Visual Studio 2008\Projects\Conference\Conference\Conference1.aspx.cs 63 117 Conference
    Please advise! Here is the complete code:
        using System;
        using System.Collections;
        using System.Drawing;
        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Configuration;
        using System.Data;
        using System.Linq;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Xml.Linq;
        using System.Runtime.InteropServices;
        using System.Drawing.Imaging;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;
        using System.IO;

        namespace Conference
        {
            public partial class Conference1 : System.Web.UI.Page
            {
                #region WebCam API
                const short WM_CAP = 1024;
                const int WM_CAP_DRIVER_CONNECT = WM_CAP + 10;
                const int WM_CAP_DRIVER_DISCONNECT = WM_CAP + 11;
                const int WM_CAP_EDIT_COPY = WM_CAP + 30;
                const int WM_CAP_SET_PREVIEW = WM_CAP + 50;
                const int WM_CAP_SET_PREVIEWRATE = WM_CAP + 52;
                const int WM_CAP_SET_SCALE = WM_CAP + 53;
                const int WS_CHILD = 1073741824;
                const int WS_VISIBLE = 268435456;
                const short SWP_NOMOVE = 2;
                const short SWP_NOSIZE = 1;
                const short SWP_NOZORDER = 4;
                const short HWND_BOTTOM = 1;
                int iDevice = 0;
                int hHwnd;

                [System.Runtime.InteropServices.DllImport("user32", EntryPoint = "SendMessageA")]
                static extern int SendMessage(int hwnd, int wMsg, int wParam, [MarshalAs(UnmanagedType.AsAny)] object lParam);
                [System.Runtime.InteropServices.DllImport("user32", EntryPoint = "SetWindowPos")]
                static extern int SetWindowPos(int hwnd, int hWndInsertAfter, int x, int y, int cx, int cy, int wFlags);
                [System.Runtime.InteropServices.DllImport("user32")]
                static extern bool DestroyWindow(int hndw);
                [System.Runtime.InteropServices.DllImport("avicap32.dll")]
                static extern int capCreateCaptureWindowA(string lpszWindowName, int dwStyle, int x, int y, int nWidth, short nHeight, int hWndParent, int nID);
                [System.Runtime.InteropServices.DllImport("avicap32.dll")]
                static extern bool capGetDriverDescriptionA(short wDriver, string lpszName, int cbName, string lpszVer, int cbVer);

                private void OpenPreviewWindow()
                {
                    int iHeight = 320;
                    int iWidth = 200;
                    // Open preview window in picturebox
                    hHwnd = capCreateCaptureWindowA(iDevice.ToString(), (WS_VISIBLE | WS_CHILD), 0, 0, 640, 480, picCapture.Handle.ToInt32(), 0);
                    // Connect to device
                    if (SendMessage(hHwnd, WM_CAP_DRIVER_CONNECT, iDevice, 0) == 1)
                    {
                        // Set the preview scale
                        SendMessage(hHwnd, WM_CAP_SET_SCALE, 1, 0);
                        // Set the preview rate in milliseconds
                        SendMessage(hHwnd, WM_CAP_SET_PREVIEWRATE, 66, 0);
                        // Start previewing the image from the camera
                        SendMessage(hHwnd, WM_CAP_SET_PREVIEW, 1, 0);
                        // Resize window to fit in picturebox
                        SetWindowPos(hHwnd, HWND_BOTTOM, 0, 0, iWidth, iHeight, (SWP_NOMOVE | SWP_NOZORDER));
                    }
                    else
                    {
                        // Error connecting to device; close window
                        DestroyWindow(hHwnd);
                    }
                }

                private void ClosePreviewWindow()
                {
                    // Disconnect from device
                    SendMessage(hHwnd, WM_CAP_DRIVER_DISCONNECT, iDevice, 0);
                    // Close window
                    DestroyWindow(hHwnd);
                }
                #endregion

                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void btnStart_Click(object sender, EventArgs e)
                {
                    int iDevice = int.Parse(device_number_textBox.Text);
                    OpenPreviewWindow();
                }
            }
        }

    Read the article

  • Starting to make progress (was: MediaRecorder prepare() causes segfault)

    - by dwilde1
    Folks, I have a situation where my MediaRecorder instance causes a segfault. I'm working with an HTC Hero, Android 1.5+ APIs. I've tried all variations, including 3gpp and H.263 and reducing the video resolution to 320x240. What am I missing? The state machine causes 4 MediaPlayer beeps and then turns on the video camera. Here's the pertinent source.
    UPDATE: ADDING SURFACE CREATE INFO. I have rebooted the device based on a previous answer to a similar question.
    UPDATE 2: I seem to be following the MediaRecorder state machine perfectly, and if I trap out the MR code, the blank surface displays perfectly and everything else functions perfectly. I can record videos manually and play them back via MediaPlayer in my code, so there should be nothing wrong with the underlying code. I've copied sample code for the surface and surfaceHolder code. I've looked at the MR instance in the Debug perspective in Eclipse and see that all (known) variables seem to be instantiated correctly. The setter calls are all now implemented in the exact order specced in the state diagram.
    UPDATE 3: I've tried all permission combinations: CAMERA + RECORD_AUDIO + RECORD_VIDEO, CAMERA only, RECORD_AUDIO + RECORD_VIDEO. This is driving me bats! :)))
    UPDATE 4: starting to work... but with puzzling results. Based on info in bug #5050, I spaced everything out. I have now gotten the recorder to actually save a snippet of video (a whole 2160 bytes!), and I did it by spacing the view visibility, prepare() and start() w.a.a.a.a.a.y out (like several hundred milliseconds for each step). I think what happens is that either bringing the surface VISIBLE has delayed processing or else the start() steps on the prepare() operation before it is complete. What is now happening, however, is that my simple timer tickdown counter is getting clobbered. Is it now that the preview and save operations are causing my main process thread to become unavailable? I'm recording only 10fps at 176x144. Referencing the code below, I've added a timer tickdown after setPreviewDisplay(), prepare() and start(). As I say, it now functions to some degree, but the results still have anomalies.
        // in activity class definition
        protected MediaPlayer mPlayer;
        protected MediaRecorder mRecorder;
        protected boolean inCapture = false;
        protected int phaseCapture = 0;
        protected int durCapturePhase = INF;
        protected SurfaceView surface;
        protected SurfaceHolder surfaceHolder;

        // in onCreate()
        // panelPreview is an empty LinearLayout
        surface = new SurfaceView(getApplicationContext());
        surfaceHolder = surface.getHolder();
        surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        panelPreview.addView(surface);

        // in timer handler runnable
        if (mRecorder == null) mRecorder = new MediaRecorder();
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mRecorder.setOutputFile(path + "/" + vlip);
        mRecorder.setVideoSize(320, 240);
        mRecorder.setVideoFrameRate(15);
        mRecorder.setPreviewDisplay(surfaceHolder.getSurface());
        panelPreview.setVisibility(LinearLayout.VISIBLE);
        mRecorder.prepare();
        mRecorder.start();
    Here is a complete log trace for the process run and crash:
        I/ActivityManager( 80): Start proc com.ejf.convince.jenplus for activity com.ejf.convince.jenplus/.JenPLUS: pid=17738 uid=10075 gids={1006, 3003}
        I/jdwp (17738): received file descriptor 10 from ADB
        W/System.err(17738): Can't dispatch DDM chunk 46454154: no handler defined
        W/System.err(17738): Can't dispatch DDM chunk 4d505251: no handler defined
        I/WindowManager( 80): Screen status=true, current orientation=-1, SensorEnabled=false
        I/WindowManager( 80): needSensorRunningLp, mCurrentAppOrientation =-1
        I/WindowManager( 80): Enabling listeners
        W/ActivityThread(17738): Application com.ejf.convince.jenplus is waiting for the debugger on port 8100...
        I/System.out(17738): Sending WAIT chunk
        I/dalvikvm(17738): Debugger is active
        I/AlertDialog( 80): [onCreate] auto launch SIP.
        I/WindowManager( 80): onOrientationChanged, rotation changed to 0
        I/System.out(17738): Debugger has connected
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): debugger has settled (1370)
        I/ActivityManager( 80): Displayed activity com.ejf.convince.jenplus/.JenPLUS: 5186 ms
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/AudioHardwareMSM72XX( 2696): AUDIO_START: start kernel pcm_out driver.
        W/AudioFlinger( 2696): write blocked for 96 msecs
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        W/AuthorDriver( 2696): Intended width(640) exceeds the max allowed width(352). Max width is used instead.
        W/AuthorDriver( 2696): Intended height(480) exceeds the max allowed height(288). Max height is used instead.
        I/AudioHardwareMSM72XX( 2696): AudioHardware pcm playback is going to standby.
        I/DEBUG (16094): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        I/DEBUG (16094): Build fingerprint: 'sprint/htc_heroc/heroc/heroc: 1.5/CUPCAKE/85027:user/release-keys'
        I/DEBUG (16094): pid: 17738, tid: 17738 com.ejf.convince.jenplus
    Thanks in advance! -- Don Wilde http://www.ConvinceProject.com
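    For comparison, below is a hedged sketch of the setter order documented for MediaRecorder, driven from SurfaceHolder.Callback.surfaceCreated() so that prepare() only runs once the preview surface actually exists. The explicit setVideoEncoder() call and the callback-driven start are additions beyond the original code, and whether this removes the segfault on the Hero is not verified; surfaceHolder, mRecorder, path and vlip are the question's own fields, and android.util.Log / android.view.SurfaceHolder imports are assumed.
        // Hedged sketch, not a verified fix for the Hero crash. It keeps the question's
        // recorder settings but (a) adds an explicit setVideoEncoder() call and (b) starts
        // recording from surfaceCreated(), so prepare() never runs against a surface that
        // has not been created yet.
        surfaceHolder.addCallback(new SurfaceHolder.Callback() {
            public void surfaceCreated(SurfaceHolder holder) {
                try {
                    MediaRecorder rec = new MediaRecorder();
                    rec.setAudioSource(MediaRecorder.AudioSource.MIC);
                    rec.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                    rec.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                    rec.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                    rec.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
                    rec.setVideoSize(320, 240);
                    rec.setVideoFrameRate(15);
                    rec.setOutputFile(path + "/" + vlip); // output path taken from the question
                    rec.setPreviewDisplay(holder.getSurface()); // surface is valid inside this callback
                    rec.prepare();   // must complete before start()
                    rec.start();
                    mRecorder = rec;
                } catch (Exception e) {
                    Log.e("MediaRecorderSketch", "prepare/start failed", e);
                }
            }
            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }
            public void surfaceDestroyed(SurfaceHolder holder) { }
        });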

    Read the article
