Search Results

Search found 1661 results on 67 pages for 'peter stuart'.

Page 3 of 67

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by saying that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging dll files a piece of cake! With its simple, straightforward and clean GUI I got my tests working. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on its website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the dlls? By doing that I think more people could get fully functional dlls out of the box, instead of trying different settings and then testing the protected dll to see whether it works or not. //Peter

    Read the article

  • ubuntu 10.10 does not recognize usb sticks and drives

    - by Peter
    When connecting any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it; nothing appears on the desktop. The output of "dmesg | tail -n10" gives me:

    [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
    [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1

    When connecting my USB scanner to the same port:

    [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
    [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
    [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3

    And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? By the way, on my other ThinkPad running Fedora 14 it works perfectly... Cheers -Peter

    Read the article

  • A story from SQLvdb and Idera

    - by Peter Larsson
    A year or so back, I struggled with some consistency problems, so I figured I needed a way to "mount" backup files as a virtual database. At the time (SQL Server 2005 and SQL Server 2008) my choice fell on Idera's SQLvdb because it felt easy enough to use. I used it a few times and it worked great. Some time later we upgraded to SQL Server 2008R2 and I didn't use SQLvdb for a long time. Until yesterday... I was upset that SQLvdb suddenly took more than 2 hours to mount the backup file (if it succeeded at all). I uninstalled the application and went for lunch. After lunch, I decided to give SQLvdb another chance, so I emailed their tech support and got a response within 30 minutes or so. Since I am a SQL Server MVP, they gave me a different serial number from my first, and I downloaded and installed a newer version. But this version was also really slow. I emailed them back with the additional information they requested and, to my surprise, an email was waiting when I came back to work this morning in which Idera explained some of the issues (bugs) and asked me to test a newer version. I did, and now a fresh mount of a 100GB database (compressed to 20GB with native compression) located on our SAN takes less than 6 minutes! Thank you. //Peter

    Read the article

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIMEOFFSET datatype (a close relative of datetime2). For a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value but stores 7 bytes, because it adds one byte that holds the precision; as the repro below shows, datetimeoffset follows the same layout with two extra bytes for the UTC offset. Start with this very simple repro:

    declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

    select cast(cast(@now as datetimeoffset(0)) as binary(9)),
           cast(cast(@now as datetimeoffset(1)) as binary(9)),
           cast(cast(@now as datetimeoffset(2)) as binary(9)),
           cast(cast(@now as datetimeoffset(3)) as binary(10)),
           cast(cast(@now as datetimeoffset(4)) as binary(10)),
           cast(cast(@now as datetimeoffset(5)) as binary(11)),
           cast(cast(@now as datetimeoffset(6)) as binary(11)),
           cast(cast(@now as datetimeoffset(7)) as binary(11))

    Now we are going to copy and paste these binary values and investigate which value represents which part of the datetimeoffset.

    Prefix  Ticks (hex)   Ticks (dec)    Days (hex)  Days (dec)  Offset (hex)  Offset (min)  Original value
    0x00    0CF700        63244          A8330B      734120      D200          210           0x000CF700A8330BD200
    0x01    75A609        632437         A8330B      734120      D200          210           0x0175A609A8330BD200
    0x02    918060        6324369        A8330B      734120      D200          210           0x02918060A8330BD200
    0x03    AD05C503      63243693       A8330B      734120      D200          210           0x03AD05C503A8330BD200
    0x04    C638B225      632436934      A8330B      734120      D200          210           0x04C638B225A8330BD200
    0x05    BE37F67801    6324369342     A8330B      734120      D200          210           0x05BE37F67801A8330BD200
    0x06    6F2D9EB90E    63243693423    A8330B      734120      D200          210           0x066F2D9EB90EA8330BD200
    0x07    57C62D4093    632436934231   A8330B      734120      D200          210           0x0757C62D4093A8330BD200

    The byte groups are:
    Prefix - the precision byte
    Ticks  - the time part
    Days   - the day part
    Offset - the UTC offset in minutes

    What you can see is that the day part is equal in all cases, which makes sense since the precision doesn't affect the date. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time can be found by adding the "UTC offset" minutes. And if you look at it, it also makes perfect sense that each following ticks value is 10 times greater when the precision is increased by one step. //Peter
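    To make the layout concrete, here is a small, hypothetical C# sketch (not from the original post) that decodes the 9-byte datetimeoffset(0) image from the first row of the table above, assuming the layout just described: one precision byte, little-endian ticks (seconds, for precision 0) since midnight UTC, little-endian days since 0001-01-01, and a little-endian UTC offset in minutes.

    using System;

    class DecodeDateTimeOffset
    {
        static void Main()
        {
            // First row of the table above: datetimeoffset(0) stored as 0x000CF700A8330BD200.
            byte[] raw = { 0x00, 0x0C, 0xF7, 0x00, 0xA8, 0x33, 0x0B, 0xD2, 0x00 };

            byte precision      = raw[0];                               // 0x00 -> precision 0
            int  seconds        = raw[1] | raw[2] << 8 | raw[3] << 16;  // 0x00F70C = 63244 -> 17:34:04 UTC
            int  days           = raw[4] | raw[5] << 8 | raw[6] << 16;  // 0x0B33A8 = 734120 days since 0001-01-01
            short offsetMinutes = (short)(raw[7] | raw[8] << 8);        // 0x00D2 = 210 minutes = +03:30

            DateTime utc = new DateTime(1, 1, 1).AddDays(days).AddSeconds(seconds);
            DateTimeOffset local = new DateTimeOffset(utc.AddMinutes(offsetMinutes),
                                                      TimeSpan.FromMinutes(offsetMinutes));

            Console.WriteLine("Precision: " + precision);
            Console.WriteLine(local);   // 2010-12-15 21:04:04 +03:30
        }
    }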

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file, but now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal and then the client says that everything is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError - whose meaning I don't know. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • FireBug not working with ASP.NET MVC

    - by Stuart
    Hi all, I have started a new ASP.NET MVC project and have included the ExtJS library files. This all works fine when built from Visual Studio and I can display various ExtJS objects. The problem, however, is that Firebug has stopped showing errors in the Console, even when I type nonsense into the code block. My setup is: VS 2008 SP1, Firebug 1.4.2, Firefox 3.5.2. Firebug is working with other public web sites and when viewing other PHP-based sites I'm developing. Does anyone have any ideas what might be causing this? Thanks for any suggestions. Stuart

    Read the article

  • Using LINQ to SQL with multiple databases

    - by Stuart Ferguson
    I am working on a new project and hoping to use LINQ to SQL for the data access, but I have come across the following issue. I need my application to access 3 databases with similar but not identical table structures. For example, Database1 and Database2 have a table called tblCustomer with 2 columns, CustomerKey and CustomerName, while Database3 has a table called tblCustomer with 3 columns, CustomerKey, CustomerName and CustomerPostCode. I am looking for a solution that will allow me to query all three databases without needing 3 GetCustomerList functions: Database1 and Database2 could use the same function, since they share the same structure, with an overriding function for Database3 to bring back the additional field. Is there a way I can declare a base DataContext class to handle Databases 1 and 2, with an inherited version for Database 3? Thanks in advance. Stuart Ferguson
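    One possible shape for this - a rough, hypothetical sketch rather than a confirmed answer, with the table and column names taken from the question - is a base DataContext that exposes a virtual GetCustomerList used by Database1 and Database2, plus a derived context for Database3 that overrides it to map the extra column. Both entity classes project into a shared DTO so callers always get one return type:

    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Shared shape handed back to callers from all three databases.
    public class CustomerDto
    {
        public int CustomerKey { get; set; }
        public string CustomerName { get; set; }
        public string CustomerPostCode { get; set; }   // stays null for Database1/Database2
    }

    // Mapping used by Database1 and Database2 (two columns).
    [Table(Name = "tblCustomer")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public int CustomerKey { get; set; }
        [Column] public string CustomerName { get; set; }
    }

    // Mapping used by Database3 (extra CustomerPostCode column).
    [Table(Name = "tblCustomer")]
    public class CustomerWithPostCode
    {
        [Column(IsPrimaryKey = true)] public int CustomerKey { get; set; }
        [Column] public string CustomerName { get; set; }
        [Column] public string CustomerPostCode { get; set; }
    }

    // Base context: usable as-is for Database1 and Database2.
    public class CustomerContextBase : DataContext
    {
        public CustomerContextBase(string connectionString) : base(connectionString) { }

        public virtual List<CustomerDto> GetCustomerList()
        {
            return GetTable<Customer>()
                .Select(c => new CustomerDto { CustomerKey = c.CustomerKey, CustomerName = c.CustomerName })
                .ToList();
        }
    }

    // Database3 only overrides the one query that differs.
    public class Database3Context : CustomerContextBase
    {
        public Database3Context(string connectionString) : base(connectionString) { }

        public override List<CustomerDto> GetCustomerList()
        {
            return GetTable<CustomerWithPostCode>()
                .Select(c => new CustomerDto
                {
                    CustomerKey = c.CustomerKey,
                    CustomerName = c.CustomerName,
                    CustomerPostCode = c.CustomerPostCode
                })
                .ToList();
        }
    }

    The same calling code can then work against any of the three connection strings by instantiating CustomerContextBase for Databases 1 and 2 and Database3Context for Database 3.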

    Read the article

  • MYSQL Fast Insert dependent on flag from a seperate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

    idLocations (auto-increment int, PK)
    X (float)
    Y (float)
    Alwaysignore (bool)

    which is used as a reference in a second table like so:

    idLocations (int, PK, "FK")
    idDates (int, PK, "FK")
    DATA1 (float)
    DATA2 (float)
    ...
    DATA7 (float)

    Ideally I'd like to find a method where I can do something like:

    INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
    VALUES (...), ..., (...)
    WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
    ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1) ...

    So, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations value matches an idLocations value flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart

    Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.

    Read the article

  • Where can I find the yum repos files for centos 4.7 i386?

    - by Peter Kahn
    Does anyone know where I can find the i386 CentOS 4.7 repo files that would normally sit in /etc/yum.repos.d? Alas, it looks like someone copied the 5.5 edition over to a 4.7 system. I can set up a new VM, install 4.7 and extract the files from that system, but I was hoping for a faster approach. Please let me know if you know where these files live on the net. I'm off to RPMfind to see what I can locate. Thanks, Peter

    Read the article

  • Feedback from SQLBits 8

    - by Peter Larsson
    This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I gave a presentation on the Saturday. Getting to Brighton was easy: I drove to Copenhagen airport at 0415, flew at 0605 and arrived at Gatwick at 0735. Then I took the direct train to Brighton and showed up at 0830, just one hour before presenting. This was the easy part. Getting home was much worse. The presentation ended at 1030 and I had to rush to the train station to get back to London and change to the tube for Heathrow. I made it to the gate just 15 seconds before closing. That included a half-mile run in the airport... Anyway, yesterday I got the feedback for my presentation. It looks good, especially since English is not my first language. The first graph shows my session sitting just halfway between the conference average and the best session. I can live with that. The second graph shows more detail about how attendees voted, and it also looks acceptable: a wider spread for the 9s, but that is an inevitable effect of how attendees perceive the session. I got a lot of 8s and the lower grades in descending order. The two people voting 4 and 5 didn't say why, so I don't know how to remedy this. The third graph covers each category of votes. Again, I find this acceptable. Session abstract and Speaker's knowledge seem to follow attendees' expectations compared to the conference average, and I seem to have met (and somewhat exceeded) attendees' expectations for the other four categories, also compared to the conference average. Since this did encourage me, I believe I will present some more at future meetings. I have a new presentation about something all developers do every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned! //Peter

    Read the article

  • Best methods for Lazy Initialization with properties

    - by Stuart Pegg
    I'm currently altering a widely used class to move as much of the expensive initialization as possible from the class constructor into lazily initialized properties. Below is an example (in C#).

    Before:

    public class ClassA
    {
        public readonly ClassB B;

        public ClassA()
        {
            B = new ClassB();
        }
    }

    After:

    public class ClassA
    {
        private ClassB _b;

        public ClassB B
        {
            get
            {
                if (_b == null)
                {
                    _b = new ClassB();
                }
                return _b;
            }
        }
    }

    There are a fair few more of these properties in the class I'm altering, and some are not used in certain contexts (hence the laziness), but if they are used they're likely to be called repeatedly. Unfortunately, the properties are often also used inside the class. This means there is a potential for the private variable (_b) to be used directly by a method without it being initialized. Is there a way to make only the public property (B) available inside the class, or even an alternative approach that gives the same initialized-when-needed behaviour?
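    One alternative worth considering (a minimal sketch, assuming .NET 4.0 or later is available) is System.Lazy<T>. Because the only way to reach the value - inside or outside the class - is through the Value property of the Lazy<ClassB> field, methods in the class can no longer accidentally use an uninitialized backing field:

    using System;

    public class ClassB { }

    public class ClassA
    {
        // The lazy wrapper replaces the raw _b field; reading _b.Value runs the
        // factory delegate exactly once, on first access, and caches the result.
        private readonly Lazy<ClassB> _b = new Lazy<ClassB>(() => new ClassB());

        public ClassB B
        {
            get { return _b.Value; }
        }

        public void SomeMethod()
        {
            // Internal callers go through the same property (or _b.Value),
            // so there is no way to touch an uninitialized ClassB by accident.
            var b = B;
        }
    }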

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slide deck; look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with... and concluded with... was a recommendation to optimize your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the Oracle Text Search and DATABASE.FULLTEXT search methods. Peter describes how a collection can become fragmented over time as content is added, updated, and deleted. Just as you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

    begin
      ctx_ddl.optimize_index('FT_IDCTEXT1', 'FULL', parallel_degree => '1');
    end;

    When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledge base article "On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT" [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he now recommends it be run daily, especially on more active systems. And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow application performance, and it was discovered that the database was starving for memory. So it's always helpful to keep a watchful eye on your database.

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but I haven't found any information on how it does.

    Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already-converted transactional data would need to be cleared down) and the Production instance being migrated into is actually Production. (We have migrated into a pre-prod instance in the past, then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment.) In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL and the installation of the ODBC Data Connector for MySQL. The reason I needed to install and configure these was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from Two Connect: "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community, after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC-based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet." Detailed below are the installation instructions for this adapter. Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen. Once the installation has completed successfully, you will then need to add the adapter to your BizTalk Server. To do this, open the BizTalk Administration console, expand Platform Settings, right-click on Adapters and select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. This adapter will then be shown in the BizTalk Administration console. Next I will be looking at using the ODBC Adapter when generating schemas, creating a receive port and creating a send port.

    Read the article

  • What's the best way of marketing to programmers?

    - by Stuart
    Disclaimer up front - I'm definitely not going to include any links in here - this question isn't part of my marketing! I've had a few projects recently where the end product is something that developers will use. In the past I've been on the receiving end of all sorts of marketing - as a developer I've gotten no end of junk - 1000s of pens, tee-shirts and mouse pads; enough CDs to keep my desk tea-free; some very useful USB keys with some logos I no longer recognise; a small forest's worth of leaflets; a bulging spam folder full of ignored emails, etc... So that's my question - What are good ways to market to developers? And as an aside - are developers the wrong people to target? - since we so often don't have a purchasing budget anyways!

    Read the article

  • Oracle ACEs in the House

    - by Justin Kestelyn
    As is customary, the Oracle ACEs have invaded the Oracle Develop Conference agenda. Why? Because Oracle ACE-dom is inherently a stamp not only of expertise, but of a unique ability to make that expertise useful to others. Plus, they're a group of "fine blokes" (U.K. subjects, educate me: is that really a word?). Perhaps if you're not able to catch one of these sessions, you will be able to see the applicable ACE in action elsewhere, at a conference or user group meeting near you.

    Session ID | Session Title | Speaker, Company
    S313355 | Developing Large Oracle Application Development Framework 11g Applications | Andrejus Baranovskis, Red Samurai Consulting
    S316641 | Xenogenetics for PL/SQL: Infusing with Java Best Practices and Design Patterns | Lucas Jellema, AMIS; Alex Nuijten, AMIS
    S317171 | Building Secure Multimedia Web Applications: Tips and Techniques | Marcel Kratochvil, Piction; Melliyal Annamalai, Oracle
    S315660 | Database Applications Lifecycle Management | Marcelo Ochoa, Facultad de Ciencias Exactas
    S315689 | Building a High-Performance, Low-Bandwidth Web Architecture | Paul Dorsey, Dulcian, Inc.
    S316003 | Managing the Earthquake: Surviving Major Database Architecture Changes | Paul Dorsey, Dulcian, Inc.; Michael Rosenblum, Dulcian, Inc.
    S314869 | Introduction to Java: PL/SQL Developers Take Heart | Peter Koletzke, Quovera
    S316184 | Deploying Applications to Oracle WebLogic Server Using Oracle JDeveloper | Peter Koletzke, Quovera; Duncan Mills, Oracle
    S316597 | Using Collections in Oracle Application Express: The Definitive Intro | Raj Mattamal, Niantic Systems, LLC
    S313382 | Using Oracle Database 11g Release 2 in an Oracle Application Express Environment | Roel Hartman, Logica
    S313757 | Debugging with Oracle Application Express and Oracle SQL Developer | Dimitri Gielis, Sumneva
    S313759 | Using Oracle Application Express in Big Projects with Many Developers | Dimitri Gielis, Sumneva
    S313982 | Forms2Future: The Ongoing Journey into the Future for Oracle-Based Organizations | Lucas Jellema, AMIS; Peter Ebell, AMIS

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice, when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't easily know that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out which particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution?

    Edited with my solution: What I did was to take a sort of nearest-neighbor/cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate the distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within distance d (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when the grid size is ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two: for a 50,000-particle subset I got a 17x increase in speed, and for a 150,000-particle cell I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
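    For readers who want to see the brute-force step in code, here is a small hypothetical sketch (written in C# for illustration; the parameter names d and minNeighbors are made up and the original code was not necessarily C#) of the neighbour-count criterion described above: a particle contributes to the aggregate clump mass when at least minNeighbors other particles lie within distance d of it.

    public static class ClumpMass
    {
        // Brute-force neighbour count: particle i is treated as part of a clump
        // when at least minNeighbors other particles sit within distance d of it,
        // and the masses of all such particles are summed.
        public static double Aggregate(double[] x, double[] y, double[] mass,
                                       double d, int minNeighbors)
        {
            int n = x.Length;
            double d2 = d * d;          // compare squared distances to avoid sqrt
            double total = 0.0;

            for (int i = 0; i < n; i++)
            {
                int neighbors = 0;
                for (int j = 0; j < n && neighbors < minNeighbors; j++)
                {
                    if (j == i) continue;
                    double dx = x[i] - x[j];
                    double dy = y[i] - y[j];
                    if (dx * dx + dy * dy <= d2)
                        neighbors++;    // early exit once the threshold is reached
                }
                if (neighbors >= minNeighbors)
                    total += mass[i];   // particle i contributes to the clump mass
            }
            return total;
        }
    }

    The grid ("stupid programmer's tree") speed-up described in the edit then amounts to running the inner loop only over particles binned into the same grid cell rather than over all n particles.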

    Read the article

  • MySQL - Installation

    - by Stuart Brierley
    In order to create a development environment for a project I am working on, I recently needed to install MySQL Server. The first step was to download the MSI. Running this presents you with the installer splash screen, detailing the version of MySQL that you are about to install - in this case MySQL Server 5.1. Next you can choose whether to perform a Typical, Complete or Custom installation. Although this is the first time I have installed MySQL and the Custom option states "Recommended for advanced users", I opted to carry out a Custom installation, specifically so that I could be sure of which features and components were installed. On the Custom Setup screen you can choose which components to install. By default the Developer Components are not included, but I opted to include some of these elements. Next up is the ready-to-install screen and then the installation progress. Following the completion of the installation you are shown a few screens with details of the MySQL Enterprise subscription option. Finally the installation is complete and you are offered the chance to configure and register MySQL. Next I will be looking at the configuration of MySQL Server 5.1.

    Read the article

  • BizTalk - Removing BAM Activities and Views using bm.exe

    - by Stuart Brierley
    Originally posted on: http://geekswithblogs.net/StuartBrierley/archive/2013/10/16/biztalk---removing-bam-activities-and-views-using-bm.exe.aspx

    On the project I am currently working on, we are making quite extensive use of BAM within our growing number of BizTalk applications, all of which are being deployed and undeployed using the excellent Deployment Framework for BizTalk 5.0. Recently I had an issue where problems on the build server had left the target development servers in a state where the BAM activities and views for a particular application were not being removed by the undeploy process, and unfortunately the definition in the solution had changed, meaning that I could not easily recreate the file from source control. To get around this I used the bm.exe application from the command line to manually remove the problem BAM artifacts. bm.exe can be found at the following path:

    C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking

    Step 1: Get the BAM definition file

    Run the following command to get the BAM definition file, containing the details of all the activities, views and alerts:

    bm.exe get-defxml -FileName:{Path and File Name Here}.xml

    Step 2: Remove the BAM artifacts

    At this stage I chose to manually remove each of my problem BAM activities and views using separate command line calls. By looking in the definition file I could see the names of the activities and views that I wanted to remove, and then use the following commands to remove first the views and then the activities:

    bm.exe remove-view -name:{viewname}
    bm.exe remove-activity -name:{activityname}

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data in to a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard

    - by Stuart Brierley
    Recently I decided to try out the BizTalk Server Pipeline Component Wizard when creating a new pipeline component for BizTalk 2009. There are different versions of the wizard available, so be sure to download the appropriate version for the BizTalk environment that you are working with. Following the download and expansion of the zip file, you should be left with a Visual Studio solution. Open this solution and build the project. Following this, installation is straightforward - locate and run the built setup.exe file in the PipelineComponentWizard Setup project and click through the small number of installation screens. Once you have completed installation you will be ready to use the wizard in Visual Studio to create your BizTalk pipeline component. Start by creating a new project, selecting BizTalk Projects then BizTalk Server Pipeline Component. You will then be presented with the splash screen. The next step is General Setup, where you will detail the class name, namespace, pipeline and component types, and the implementation language for your pipeline component. The options for pipeline type are Receive, Send or Any. Depending on the pipeline type chosen, there are different options presented for the component type, matching those available within the BizTalk pipelines themselves:

    Receive - Decoder, Disassembling Parser, Validate, Party Resolver, Any.
    Send - Encoder, Assembling Serializer, Any.
    Any - Any.

    The options for implementation language are C# or VB.Net. Next you must complete the UI settings - these are the settings that affect the appearance of the pipeline component within Visual Studio. You must detail the component name, version, description and icon. Next is the definition of the variables that the pipeline component will use; the values for these variables will be defined in Visual Studio when creating a pipeline. The options for each variable you require are:

    Designer Property - the name of the variable.
    Data Type - String, Boolean, Integer, Long, Short, Schema List, Schema With None.

    Clicking finish now will complete the wizard stage of the creation of your pipeline component. Once the wizard has completed you will be left with a BizTalk Server Pipeline Component project containing a skeleton code file for you to complete. Within this code file you will mainly be interested in the Execute method, which is left mostly empty, ready for you to implement your custom pipeline code:

    #region IComponent members

    /// <summary>
    /// Implements IComponent.Execute method.
    /// </summary>
    /// <param name="pc">Pipeline context</param>
    /// <param name="inmsg">Input message</param>
    /// <returns>Original input message</returns>
    /// <remarks>
    /// IComponent.Execute method is used to initiate
    /// the processing of the message in this pipeline component.
    /// </remarks>
    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
    {
        //
        // TODO: implement component logic
        //
        // this way, it's a passthrough pipeline component
        return inmsg;
    }

    #endregion

    Once you have implemented your custom code, build and compile your custom pipeline component, then add the compiled .dll to C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components.
    When creating a new pipeline in Visual Studio, reset the toolbox and the custom pipeline component should appear, ready for you to use in your BizTalk pipeline. Drop the pipeline component into the relevant pipeline stage and configure the component properties (the variables defined in the wizard). You can now deploy and use the pipeline as you would any other custom pipeline.
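    As a purely illustrative example (not from the original post) of the kind of logic that might go into the generated Execute stub, the sketch below writes a custom context property onto every message passing through the component. It is intended to slot into the wizard-generated class, which already references the BizTalk interop assemblies; the property name and namespace are made up for the example.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(
        Microsoft.BizTalk.Component.Interop.IPipelineContext pc,
        Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
    {
        // Hypothetical property name and namespace, for illustration only.
        const string propName = "ProcessedBy";
        const string propNamespace = "https://example.org/schemas/properties";

        // Write (non-promoted) metadata into the message context so that
        // downstream components and orchestrations can read it.
        inmsg.Context.Write(propName, propNamespace, "MyWizardGeneratedComponent");

        return inmsg;
    }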

    Read the article

  • MySQL - ODBC Data Connector

    - by Stuart Brierley
    Having previously installed and then configured MySQL, you may now need to install the ODBC Data Connector driver in order to connect to your MySQL database. Following the splash screen, the first thing to choose is the setup type for your installation. As usual I chose Custom so that I could see the components that were actually being installed. In this case the custom setup screen allows you to choose to install the driver and the documentation. Finally you can complete the installation. Assuming it completes okay, you have now installed the MySQL ODBC driver. My intention in installing all these MySQL components is to get BizTalk 2009 talking to the MySQL database for a solution that I am currently working on. For this I will next be looking at the Community ODBC Adapter.

    Read the article

  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk archiving pipeline component, which can be found on CodePlex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" exception when processing as part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context's resource tracker. So a block of my code goes from:

    originalStrm = bodyPart.GetOriginalDataStream();

    if (!originalStrm.CanSeek)
    {
        ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
        inmsg.BodyPart.Data = seekableStream;
        originalStrm = inmsg.BodyPart.Data;
    }

    fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
    binWriter = new BinaryWriter(fileArchive);

    byte[] buffer = new byte[bufferSize];
    int sizeRead = 0;

    while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
    {
        binWriter.Write(buffer, 0, sizeRead);
    }

    to:

    originalStrm = bodyPart.GetOriginalDataStream();

    if (!originalStrm.CanSeek)
    {
        ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
        inmsg.BodyPart.Data = seekableStream;
        originalStrm = inmsg.BodyPart.Data;
    }

    pc.ResourceTracker.AddResource(originalStrm);

    fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
    binWriter = new BinaryWriter(fileArchive);

    byte[] buffer = new byte[bufferSize];
    int sizeRead = 0;

    while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
    {
        binWriter.Write(buffer, 0, sizeRead);
    }

    So far this seems to have solved the issue; the error is no more, and my archive component is continuing its way through testing.

    Read the article

  • SAB BizTalk Archiving Pipeline Component - Codeplex

    - by Stuart Brierley
    In an effort to give a little more back to the BizTalk development community, I have created my first CodePlex project. The SAB BizTalk Archiving Pipeline Component was written using Visual Studio 2010, with BizTalk Server 2010 intended as the target platform. It is currently at version 0.1, meaning that I have not yet completed all the intended functionality and have so far carried out a limited number of tests. It does, however, archive files within the bounds of the functionality implemented so far and seems to be stable in use. It is based on a recent evolution of a basic archiving component that I wrote in the past, and it is my hope that it will continue to evolve in the coming months. This work was inspired by some old posts by Gilles Zunino and Jeff Lynch. You can download the documentation, source code or component dll from CodePlex, but to give you a taste here is the first section of the documentation to whet your appetite:

    SAB BizTalk Archiving Pipeline Component

    The SAB BizTalk Archiving Pipeline Component has been developed to allow custom pipelines to be created that can archive messages at any stage of pipeline processing. It works in both receive and send pipelines and will archive messages to file based on the configuration applied to the component in the BizTalk Administration Console. The Archiving Pipeline Component has been coded for use with BizTalk Server 2010; use with other versions of BizTalk has not been tested. The Archiving Pipeline Component is supplied as a dll file that should be placed in the BizTalk Server Pipeline Components folder. It can then be used when developing custom pipelines to be deployed as part of your BizTalk Server applications. This version of the component allows you to use a number of generic messaging macros and also a small number that are specific to the FILE adapter. It is intended to extend these macros to cover context properties from other adapters in future releases.

    Archive Pipeline Parameters

    As with all pipeline components, the following parameters can be set when creating your custom pipeline and at runtime via the administration console.

    Enabled: enables and disables the archive process.
    True - messages will be archived. False - messages will be passed to the next stage in the pipeline without any processing being performed.

    File Name: the file name of the archived message.
    Allows the component to build the archive file name at run time, based on the values entered, the permitted macros and data extracted from the message context properties.
    e.g. %FileReceivedFileName%-%InterchangeSequenceNumber%

    File Mask: the extension to be added to the File Name following all macro processing.
    e.g. .xml

    File Path: the path to which the archived message should be saved.
    Allows the component to build the archive directory at run time, based on the values entered, the permitted macros and data extracted from the message context properties.
    e.g. C:\Archive\%ReceivePortName%\%Year%\%Month%\%Day%\
         \\ArchiveShare\%ReceivePortName%\%Date%\

    Overwrite: enables and disables overwriting of existing files.
    True - any existing file with the same File Path/Name combination (following macro replacement) will be overwritten.
    False - any existing file with the same File Path/Name combination (following macro replacement) will not be overwritten; the current message will instead be archived with a GUID appended to the File Name.

    Read the article
