Monthly Archives

Articles indexed in February 2011

Page 62/314

  • A file's web address is different from the local file structure

    - by allanb
    I'm not sure how best to describe this (as you can clearly tell from the title), so I'll give you an example: I have a multisite Drupal installation. Each site's sitemap.xml file is located on the server at /sites/example.com/files, yet in a browser (and to search engines) it is accessible at example.com/sitemap.xml. I was wondering how this is achieved? Is this called a virtual directory? Thank you
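
    In Drupal's case this usually isn't an IIS-style virtual directory at all: the clean-URL rewrite rules send every request for a non-existent file through index.php, and a module (typically XML sitemap) then streams the file from the per-site files directory. A minimal sketch of the standard Drupal .htaccess rule, assuming Apache with mod_rewrite enabled:

      RewriteEngine On
      # only rewrite when no real file or directory matches the request
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    So a request for example.com/sitemap.xml becomes index.php?q=sitemap.xml, and Drupal's router decides which site's file to serve.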

    Read the article

  • How to correct my null exception?

    - by kostas
    Hi! I have created a contact form using C# and web services. I would like to get an alert message if the user hasn't filled in his name or when his name is a number. This is my C# code: public partial class Default2 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void Button1_Click(object sender, EventArgs e) { Validation.WebService validate = new Validation.WebService(); bool ismail = validate.isEmail(TextBox2.Text); if (!ismail) { Label1.Text = "your mail is wrong!!"; } Validation.nameVal valid = new Validation.nameVal(); bool isname = valid.isName(TextBox1.Text); if (!isname ) { Label2.Text = "Your name is wrong!!"; } else if (isname==null) { Label2.Text = "Please fill in your name"; } if (isname && ismail) { { Label1.Text = null; Label2.Text = null; Label3.Text = "Your message has been send!";} } } } With this code I get a null exception...
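
    A hedged observation: a non-nullable bool can never equal null, so the isname==null branch is dead code, and the exception most likely comes from passing empty input into the web service before any check runs. A reordering sketch, assuming the Validation service methods exist as shown in the question:

      protected void Button1_Click(object sender, EventArgs e)
      {
          // check for missing input first, before calling the service at all
          if (string.IsNullOrEmpty(TextBox1.Text))
          {
              Label2.Text = "Please fill in your name";
              return;
          }

          Validation.WebService validate = new Validation.WebService();
          bool ismail = validate.isEmail(TextBox2.Text);
          Validation.nameVal valid = new Validation.nameVal();
          bool isname = valid.isName(TextBox1.Text);

          Label1.Text = ismail ? string.Empty : "your mail is wrong!!";
          Label2.Text = isname ? string.Empty : "Your name is wrong!!";
          if (ismail && isname)
          {
              Label3.Text = "Your message has been sent!";
          }
      }

    Using string.Empty instead of null when clearing labels also avoids a possible NullReferenceException elsewhere.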

    Read the article

  • PHP writing to file - empty?

    - by The Devil
    Hey, I've been struggling with writing a single string into a file. I'm using just some simple code under Slackware 13: $fp = fopen('/my/absolute/path/data.txt', 'w'); fwrite($fp, 'just a testing string...'); fclose($fp); The file gets created (if it doesn't already exist) but it's empty! The directory in which this file is written is owned by Apache's user & group (daemon.daemon) and has 0777 permissions. This has never happened to me before. I'm curious what the reason is that I'm not able to write to the file. Thanks in advance.
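
    When fopen succeeds but the file stays empty, the usual suspects are disk quota/space, safe_mode or open_basedir restrictions, or a second script truncating the file afterwards. A diagnostic sketch that checks every step instead of assuming success (the path is the asker's; error_get_last() needs PHP 5.2+):

      <?php
      error_reporting(E_ALL);
      ini_set('display_errors', '1');

      $fp = fopen('/my/absolute/path/data.txt', 'w');
      if ($fp === false) {
          die('fopen failed: ' . var_export(error_get_last(), true));
      }

      $bytes = fwrite($fp, 'just a testing string...');
      if ($bytes === false) {
          die('fwrite failed: ' . var_export(error_get_last(), true));
      }

      fclose($fp);
      echo "wrote $bytes bytes";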

    Read the article

  • No route matches when trying to edit

    - by mmichael
    Here's the scoop: I've created a test app that allows users to create ideas and then add "bubbles" to these ideas. Currently, a bubble is just text. I've successfully linked bubbles to ideas. Furthermore, when a user goes to view an idea it lists all of the bubbles attached to that idea. The user can even delete the bubble for any given idea. My problem lies in editing bubbles. When a user views an idea, he sees the idea's content as well as any bubbles for that idea. As a result, I've set all my bubble controls (editing and deleting) inside the ideas "show" view. My code for editing a bubble for an idea is <%= link_to 'Edit Bubble', edit_idea_bubble_path %>. I ran rake routes to find the correct path for editing bubbles and that is what was listed. Here's my error: No route matches {:action=>"edit", :controller=>"bubbles"} In my bubbles controller I have: def edit @idea = Idea.find(params[:idea_id]) @bubble = @idea.bubbles.find(params[:id]) end def update @idea = Idea.find(params[:idea_id]) @bubble = @idea.bubbles.find(params[:id]) respond_to do |format| if @bubble.update_attributes(params[:bubble]) format.html { redirect_to(@bubble, :notice => 'Bubble was successfully updated.') } format.xml { head :ok } else format.html { render :action => "Edit" } format.xml { render :xml => @bubble.errors, :status => :unprocessable_entity } end end end To go a step further, I have the following in my routes.rb file: resources :ideas do resources :bubbles end So far everything seems to function except when I try to edit a bubble. I'd love some guidance. Thanks!
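
    The nested helper needs both the parent and the child record; called with no arguments it cannot build a URL, hence "No route matches". A sketch, assuming the show view iterates over the idea's bubbles as bubble:

      <% @idea.bubbles.each do |bubble| %>
        <%= link_to 'Edit Bubble', edit_idea_bubble_path(@idea, bubble) %>
      <% end %>

    While in there: the failing branch of update renders render :action => "Edit", and template lookup is case-sensitive on most filesystems, so that should probably be "edit".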

    Read the article

  • jQuery carousel

    - by Adam Wright
    Hi, I'm looking for a jQuery carousel to contain images with the following requirements: must run on auto; must not pause; must be able to pause on image hover; must have external controls that, once clicked, change the direction of the carousel; must be able to fade out buttons when the carousel is moving in that direction. I've looked at jCarousel and jCarousel Lite but both have a pause in auto mode or you have to specify how many items to slide. I just want it to move continuously like a news ticker. Any suggestions on how to achieve this or how to customise jCarousel or jCarousel Lite?
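
    For a continuous news-ticker motion, a hand-rolled loop can be simpler than customising a plugin. A rough jQuery sketch (the selector names are hypothetical, and resuming after a hover restarts from the current item rather than mid-slide):

      function tick($list) {
          var $first = $list.children(':first');
          // slide by exactly one item's width, then move that item to the end and repeat
          $list.animate({ marginLeft: -$first.outerWidth(true) }, 2000, 'linear', function () {
              $list.css('marginLeft', 0).append($first);
              tick($list);
          });
      }

      var $ticker = $('#ticker ul');
      tick($ticker);
      $ticker.hover(
          function () { $(this).stop(true); },   // pause on hover
          function () { tick($(this)); }         // resume on mouse-out
      );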

    Read the article

  • Adding a line with text between a pattern and the next occurrence of the same pattern in bash

    - by kasper
    I am writing a bash script that modifies a file that looks like this: --- usr1 --- data data data data data data data data data data data data --- usr2 --- data data data data data data data data --- usr3 --- data data data data --- endline --- One question is: how do I add the next user line --- usrn --- after the last user's data lines? The second is: how do I delete a specific user's data lines (the data lines and the --- usrx --- header), i.e. I would like to delete usr2 with all his data. It must work on bash 2.05 :) and I think it will use awk or sed, but I'm not sure.
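
    Both jobs fit plain awk, which was already ancient by bash 2.05 days. A sketch, assuming the header lines always look exactly like --- name --- (usr4 below is a hypothetical new user):

      # 1) insert a new user block just before the end marker
      awk '/^--- endline ---$/ { print "--- usr4 ---"; print "data data data data" } { print }' file > file.new

      # 2) delete usr2's header plus everything up to the next header line
      awk '/^--- .* ---$/ { skip = ($2 == "usr2") } !skip { print }' file > file.new

    The second one works because every header line resets the skip flag, so skipping naturally stops at the next --- usr3 --- (or --- endline ---) header.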

    Read the article

  • Unicode data from NSData to NSString

    - by Jeff
    So if I have NSData from an HTTP request, I do something like this: NSString *test = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]; This will result in nil if the data contains weird Unicode data (the title is from reddit): {"title":"click..¦¦me..and..then¦¦________ ¦¦check¦¦_.your...¦¦.__...¦¦____ ¦¦....¦¦¦¦¦¦¦¦¦¦¦¦¦¦....¦¦____ ¦¦¦¦¦¦....¦¦¦¦¦¦....¦¦¦¦¦¦____ ¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦____ ....¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦______ ........¦¦..._recently....¦¦________ ....¦¦....viewed....links....¦¦_____"}, How would I convert the data to a string? Ideally, it would be best if the string weren't nil so I could parse it as JSON, but even a lossy conversion is fine with me in these cases. I'm not familiar with Unicode (naive American I am), so any enlightenment about that would be a nice bonus :)
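
    initWithData:encoding: returns nil whenever the bytes are not valid in the encoding you name, so the first thing to check is the charset in the response's Content-Type header. A fallback sketch: ISO Latin-1 maps every possible byte to some character, so it always yields a string (possibly with garbled glyphs):

      NSString *test = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
      if (test == nil) {
          // lossy fallback: every byte is valid Latin-1, so this cannot return nil
          test = [[NSString alloc] initWithData:data encoding:NSISOLatin1StringEncoding];
      }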

    Read the article

  • Changing variables outside of scope in C#

    - by sam
    Hi, I'm a beginner C# programmer, and to improve my skills I decided to give Project Euler a try. The first problem on the site asks you to find the sum of all the multiples of 3 and 5 under 1000. Since I'm essentially doing the same thing twice, I made a method to multiply a base number incrementally and add all the answers together. public static int SumOfMultiplication(int Base, int limit) { bool Escape = false; for (int mult = 1; Escape == true; mult++) { int Number = 0; int iSum = 0; Number = Base * mult; if (Number > limit) return iSum; else iSum = iSum + Number; } Regardless of what I put in for both parameters, it ALWAYS returns zero. I'm 99% sure it has something to do with the scope of the variables, but I have no clue how to fix it. All help is appreciated. Thanks in advance, Sam
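
    Two things conspire here: the loop condition Escape == true is false on the very first pass, so the body never runs, and iSum is declared inside the loop, so it would restart at 0 on every iteration anyway. A corrected sketch (note Euler #1 also needs the multiples of 15 subtracted to avoid double counting):

      public static int SumOfMultiples(int baseNumber, int limit)
      {
          int sum = 0;                                   // lives outside the loop
          for (int mult = 1; baseNumber * mult < limit; mult++)
          {
              sum += baseNumber * mult;
          }
          return sum;
      }

      // usage: multiples of 3 or 5 below 1000
      int answer = SumOfMultiples(3, 1000) + SumOfMultiples(5, 1000) - SumOfMultiples(15, 1000);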

    Read the article

  • Need help with Excel VBA

    - by aos
    Private Sub cmdClear_Click()
        Dim Confirm As Integer
        Confirm = MsgBox("Are you sure you want clear this Sheet?", vbYesNo, "WARNING: Date Changed")
        If Confirm = 6 Then
            Sheets("OPV").Activate
            'Sheets("OPV").Activate
            Sheets("OPV").Range("B4:BZ1000").ClearContents
            Sheets("OPV").Range("B4:BZ1000").Interior.Pattern = xlNone
            Sheets("OPV").Activate
            Sheets("OPV").Range("B4").Activate
            MsgBox " Done .. ", vbInformation, "Clear ......"
        End If
    End Sub

    Read the article

  • sem_open() error: "undefined reference to sem_open()" on Linux (Ubuntu 10.10)

    - by Robin
    So I am getting the error "undefined reference to sem_open()" even though I have included the semaphore.h header. The same thing is happening for all my pthread function calls (mutex, pthread_create, etc.). Any thoughts? I am using the following command to compile: g++ '/home/robin/Desktop/main.cpp' -o '/home/robin/Desktop/main.out' #include <iostream> using namespace std; #include <pthread.h> #include <semaphore.h> #include <fcntl.h> const char *serverControl = "/serverControl"; sem_t* semID; int main ( int argc, char *argv[] ) { //create semaphore used to control servers semID = sem_open(serverControl,O_CREAT,O_RDWR,0); return 0; }
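
    "Undefined reference" is a linker error, not a missing-header problem: semaphore.h only declares the functions, while the implementations live in the pthread library (and, on many glibc versions, librt for sem_open), which must be named at link time. A likely fix for the quoted command:

      g++ '/home/robin/Desktop/main.cpp' -o '/home/robin/Desktop/main.out' -pthread -lrt

    -pthread sets both the compile-time and link-time pthread options, which is generally preferred over a bare -lpthread.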

    Read the article

  • Generating VS 2005 .vcproj's by hand

    - by Kevin
    I'm working on a script that generates Visual Studio 2005 C++ project files (.vcproj). The script reads a makefile, then spits out a C++ project. INPUT: makefile --- OUTPUT: VS 2005 C++ project (.vcproj) However, when I try to build the auto-generated project in VS 2005, the build fails with "Unspecified Error." Evidently, I am not generating the VS 2005 .vcproj file correctly. Assuming that my C++ project file was malformed, I opened up VS 2005 and made a new C++ project. I then copied the good, VS 2005-created project file over my non-working, malformed project file, and replaced the Name, Reference Includes (.libs), Compile Includes (.cc, .c), etc. in the good VS 2005 project with my malformed project file's information. However, I still cannot get VS 2005 to compile my .vcproj. Perhaps VS 2005 is very particular about the content of its .vcproj's? Please give me advice on how to generate a VS 2005 .vcproj by hand. Thanks!

    Read the article

  • Problem with loading compiled C code in R x64 using dyn.load

    - by Sacha Epskamp
    I recently went from a 32-bit laptop to a 64-bit desktop (both Win7). I just found out that I now get an error when loading DLLs using dyn.load. I guess this is a simple mistake and I am overlooking something. For example, I write this simple C function (foo.c): void foo( int *x) {*x = *x + 1;} Then compile it in the command prompt: R CMD SHLIB foo.c Then in 32-bit R I can use it: > setwd("R") > dyn.load("foo.dll") > .C("foo",as.integer(1)) [[1]] [1] 2 but in 64-bit R I get: > dyn.load("foo.dll") Error in inDL(x, as.logical(local), as.logical(now), ...) : unable to load shared object 'C:/Users/Sacha/Documents/R/foo.dll': LoadLibrary failure: %1 is not a valid Win32 application.
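
    "%1 is not a valid Win32 application" is Windows' way of saying the DLL's architecture doesn't match the R process: R CMD SHLIB produced a 32-bit DLL, which 64-bit R cannot load. A likely fix, assuming R >= 2.12 with the 64-bit Rtools toolchain installed, is to request the 64-bit architecture explicitly:

      R --arch x64 CMD SHLIB foo.c

    Alternatively, run the same R CMD SHLIB command from the 64-bit R binary (bin\x64\R.exe); keeping one DLL per architecture is how packages handle this.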

    Read the article

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC 3 (Razor) web application with a particular page which is highly database-intensive, and user experience is of the utmost priority. Thus, I am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching: public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences) { var results = _locationService.FindStuffByCriteria(searchPreferences); return PartialView("SearchResults", results); } As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting its info from - that is the job of the service. A couple of notes on the flow of control: Controllers get DI'ed a particular service, depending on its area. In this example, this controller gets a LocationService. Services call through to an IQueryable<T> repository and materialize results into T or ICollection<T>. How I want to implement caching: I can't use Output Caching - for a few reasons. First of all, this action method is invoked from the client-side (jQuery/AJAX), via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, I don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that - there is actually two-level caching going on. As I hinted above, I need to use regular data-caching, e.g. Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - I only want caching on this particular action method. First thoughts would tell me to create another service (which inherits LocationService) and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems: The services are basic class libraries - no references to anything extra. I would need to add a reference to System.Web here. I would have to access the HTTP context outside of the web application, which is considered bad practice, not only for testability, but in general - right? I also thought about using the Models folder in the web application (which I currently use only for ViewModels), but having a cache service in a models folder just doesn't sound right. So - any ideas? Is there an MVC-specific thing (like action filters, for example) I can use here? General advice/tips would be greatly appreciated.
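
    One pattern that fits these constraints: define a small caching abstraction next to the services, and let the web application supply the HTTP-backed implementation through the existing DI container. The service library never references System.Web, and tests can pass a fake. A sketch (the names are hypothetical):

      // in the service class library - no System.Web reference
      public interface ICacheProvider
      {
          T Get<T>(string key) where T : class;
          void Set(string key, object value, TimeSpan lifetime);
      }

      // in the web application project
      using System.Web;
      using System.Web.Caching;

      public class HttpCacheProvider : ICacheProvider
      {
          public T Get<T>(string key) where T : class
          {
              return HttpRuntime.Cache[key] as T;   // HttpRuntime.Cache works outside a request too
          }

          public void Set(string key, object value, TimeSpan lifetime)
          {
              HttpRuntime.Cache.Insert(key, value, null,
                  DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
          }
      }

    A CachedLocationService wrapping the real LocationService can then do the check-cache/query/populate dance for just this one call, keeping the controller exactly as thin as it is now.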

    Read the article

  • ListView icons show up blurry (C#)

    - by Balk
    I'm attempting to display a "LargeIcon" view in a ListView control, but the images I specify are blurry. The .png files are 48x48 and that's what I have the ImageList set to display at in its properties. There's one thing that I've noticed (which is probably the cause) but I don't know how to change it: inside the "Images Collection Editor", where you choose the images for the ImageList control, it looks like the wrong size is being set for each image - the "PhysicalDimension" and the "Size" show as 16x16 and can't be edited. Does anyone have any ideas? Many thanks!
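
    16x16 is the ImageList default, and images added while that default is in force are stored at that size (hence the blur when they're scaled back up to 48x48). Setting ImageSize clears the image collection, so it has to happen before the images go in. A sketch in code, with hypothetical control names:

      imageList1.ColorDepth = ColorDepth.Depth32Bit;       // keep PNG alpha
      imageList1.ImageSize = new Size(48, 48);             // must be set before adding images
      imageList1.Images.Add(Image.FromFile("icon.png"));   // now stored at full 48x48
      listView1.LargeImageList = imageList1;

    In the designer, the equivalent is setting ImageSize on the ImageList first and only then populating the Images collection.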

    Read the article

  • Convert Array to Multidimensional Array

    - by Tim
    I'd like to convert a single array into a multidimensional array. This is what I get when web scraping a page, except it is not the end result that I am looking for. Change: Rooms: Array ( [0] => name [1] => value [2] => size [3] => &nbsp; [4] => name [5] => value [6] => size [7] => &nbsp; [8] => name [9] => value [10] => size [11] => &nbsp; [12] => name [13] => value [14] => size [15] => &nbsp; ) Into: Rooms: Array ( Room: Array ( [0] => name [1] => value [2] => size ), Room: Array ( [0] => name [1] => value [2] => size ), Room: Array ( [0] => name [1] => value [2] => size ) )
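
    If every room really occupies four consecutive slots (name, value, size, then an &nbsp; separator), this is a filter-then-chunk job. A PHP sketch, assuming PHP 5.3+ for the closure and $flat as the scraped array:

      $rooms = array_chunk(
          array_values(array_filter($flat, function ($item) {
              return $item !== '&nbsp;';        // drop the separator entries
          })),
          3                                      // name, value, size per room
      );

    array_filter keeps the original keys, so the array_values call re-indexes before array_chunk groups the remainder in threes.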

    Read the article

  • Does main() need to be in every script containing handlers?

    - by Will Merydith
    Experienced Java programmer trying to learn Python. I have an application on Google App Engine and want to move my admin handlers to a separate file. So now I have main.py and admin.py. I've set up app.yaml to route traffic properly, and have added the call to WSGIApplication() in each file to route to the appropriate handler. My question is: does each script file need def main() and the corresponding if statement? application = webapp.WSGIApplication([(r'/admin/(.*)', Admin)], debug=True) def main(): run_wsgi_app(application) if __name__ == '__main__': main()

    Read the article

  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds: Stocks - Contains a unique stock name and its daily percent change. UserTransactions - Contains information regarding a specific purchase of a stock made by a user: the value of the purchase along with a reference to Stock for the current purchase. db.Model Python modules:

    class Stocks(db.Model):
        stockname = db.StringProperty(multiline=True)
        dailyPercentChange = db.FloatProperty(default=1.0)

    class UserTransactions(db.Model):
        buyer = db.UserProperty()
        value = db.FloatProperty()
        stockref = db.ReferenceProperty(Stocks)

    Once an hour I need to update the database: update the daily percent change in Stocks and then update the value of all entities in UserTransactions that refer to that stock. The following Python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities which refer to the stock and update their value:

    Stocks.py

    # Iterate over all stocks in datastore
    for stock in Stocks.all():
        # update daily percent change in datastore
        db.run_in_transaction(updateStockTxn, stock.key())
        # create a task to update all user transactions entities referring to this stock
        taskqueue.add(url='/task', params={'stock_key': str(stock.key()), 'value': self.request.get('some_val_for_stock')})

    def updateStockTxn(stock_key):
        # fetch the stock again - necessary to avoid concurrency updates
        stock = db.get(stock_key)
        stock.dailyPercentChange = data.get('some_val_for_stock')  # I get this value from outside
        # ... some more calculations here ...
        stock.put()

    Task.py (/task)

    # Amount of transactions per task
    amountPerCall = 10
    stock = db.get(self.request.get("stock_key"))
    # Get all user transactions which point to current stock
    user_transaction_query = stock.usertransactions_set
    cursor = self.request.get("cursor")
    if cursor:
        user_transaction_query.with_cursor(cursor)
    # Spawn another task if more than 10 transactions are in datastore
    transactions = user_transaction_query.fetch(amountPerCall)
    if len(transactions) == amountPerCall:
        taskqueue.add(url='/task', params={'stock_key': str(stock.key()), 'value': self.request.get('some_val_for_stock'), 'cursor': user_transaction_query.cursor()})
    # Iterate over all transactions pointing to stock and update their value
    for transaction in transactions:
        db.run_in_transaction(updateUserTransactionTxn, transaction.key())

    def updateUserTransactionTxn(transaction_key):
        # fetch the transaction again - necessary to avoid concurrency updates
        transaction = db.get(transaction_key)
        transaction.value = transaction.value * self.request.get('some_val_for_stock')
        db.put(transaction)

    The problem: Currently the system works great, but the problem is that it is not scaling well… I have around 100 Stocks with 300 UserTransactions, and I run the update every hour. In the dashboard, I see that task.py takes around 65% of the CPU (Stocks.py takes around 20%-30%) and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the problem is the scaling of the system… Using 6.5 CPU hours for 100 stocks is very poor. I was wondering, given the requirements of the system as mentioned above, if there is a better and more efficient implementation (or just a small change that can help with the current implementation) than the one presented here. Thanks!! Joel
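
    Most of the CPU bill here is the one-transaction-per-entity pattern: each db.run_in_transaction is a separate datastore round trip with retry machinery. If per-entity isolation isn't strictly required (the task itself is already the unit of work), updating in memory and writing back with one batch put is far cheaper. A sketch of the task body under that assumption:

      transactions = user_transaction_query.fetch(amountPerCall)
      factor = float(self.request.get('some_val_for_stock'))

      for transaction in transactions:
          transaction.value *= factor      # mutate in memory only

      db.put(transactions)                 # one batched write instead of N transactions

    Raising amountPerCall well above 10 (batch puts accept hundreds of entities) would shrink both the task count and the total CPU further.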

    Read the article

  • Modify passed, nested dict/list

    - by Gerenuk
    I was thinking of writing a function to normalize some data. A simple approach is def normalize(l, aggregate=sum, norm_by=operator.truediv): aggregated=aggregate(l) for i in range(len(l)): l[i]=norm_by(l[i], aggregated) l=[1,2,3,4] normalize(l) l -> [0.1, 0.2, 0.3, 0.4] However, for nested lists and dicts where I want to normalize over an inner index this doesn't work. I mean I'd like to get l=[[1,100],[2,100],[3,100],[4,100]] normalize(l, ?? ) l -> [[0.1,100],[0.2,100],[0.3,100],[0.4,100]] Any ideas how I could implement such a normalize function? Maybe it would be crazy cool to write normalize(l[...][0]) Is it possible to make this work? Or any other ideas? Also, not only lists but also dicts could be nested. Hmm... EDIT: I just found out that numpy offers such a syntax (for lists however). Anyone know how I would implement the ellipsis trick myself?
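
    Short of numpy's fancy indexing, one plain-Python route is to parameterise how the function reads and writes each element. A sketch that keeps the original signature's spirit; get and put are caller-supplied accessors:

      import operator

      def normalize(l, get=lambda c, i: c[i], put=lambda c, i, v: c.__setitem__(i, v),
                    aggregate=sum, norm_by=operator.truediv):
          total = aggregate(get(l, i) for i in range(len(l)))
          for i in range(len(l)):
              put(l, i, norm_by(get(l, i), total))

      l = [[1, 100], [2, 100], [3, 100], [4, 100]]
      normalize(l, get=lambda c, i: c[i][0], put=lambda c, i, v: c[i].__setitem__(0, v))
      # l -> [[0.1, 100], [0.2, 100], [0.3, 100], [0.4, 100]]

    Dicts would need an iterable of keys instead of range(len(l)); passing that in as one more parameter extends the same idea.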

    Read the article

  • JavaScript: Replacement for XMLSerializer.serializeToString()?

    - by NRaf
    I'm developing a website using the Seam framework and the RichFaces AJAX library (this isn't really all that important to the problem at hand - just some background). I seem to have uncovered a bug, however, in RichFaces which, in certain instances, will cause AJAX-based updating to fail in IE8 (see here for more info: http://community.jboss.org/message/585737). The following is the code where the exception is occurring: var anchor = oldnode.parentNode; if(!window.opera && !A4J.AJAX.isWebkitBreakingAmps() && oldnode.outerHTML && !oldnode.tagName.match( /(tbody|thead|tfoot|tr|th|td)/i ) ){ LOG.debug("Replace content of node by outerHTML()"); if (!Sarissa._SARISSA_IS_IE || oldnode.tagName.toLowerCase()!="table") { try { oldnode.innerHTML = ""; } catch(e){ LOG.error("Error to clear node content by innerHTML "+e.message); Sarissa.clearChildNodes(oldnode); } } oldnode.outerHTML = new XMLSerializer().serializeToString(newnode); } The last line (the one with XMLSerializer) is where the exception is occurring in IE. I was wondering if anyone knows of any replacement method / library / etc. I could use there (only on IE is fine). Thanks.
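
    IE8 has no XMLSerializer, but the MSXML XML DOM nodes that Sarissa/A4J pass around expose the proprietary xml property, which returns the same markup. A hedged drop-in sketch for that last line, assuming newnode really is an MSXML node on IE:

      // prefer the standard API, fall back to IE's MSXML .xml property
      var serialized = (typeof XMLSerializer !== 'undefined')
          ? new XMLSerializer().serializeToString(newnode)
          : newnode.xml;
      oldnode.outerHTML = serialized;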

    Read the article

  • Weakly connected tree

    - by wow_22
    Hello, I have an algorithmic problem on a weakly connected tree T where w(T) = sum(w(e)) over all edges e (by w I denote weight). I have to prove that we can use Prim's and Kruskal's algorithms when w(T) = max{w(e)}, the maximum over all edges e belonging to T (I proved that), but I also have to prove the same for w(T) = ∏(w(e)), where ∏ denotes the product over all edges belonging to T. I tried a lot to prove it but did not come up with a result proving or disproving the use of Prim/Kruskal. Any help will be more than appreciated. Thanks.
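
    A hedged hint rather than a full proof: when all weights are positive, the product objective is the sum objective in disguise, because logarithms turn products into sums and preserve order:

      \log\Big(\prod_{e \in T} w(e)\Big) = \sum_{e \in T} \log w(e),
      \qquad
      \prod_{e \in T} w(e) < \prod_{e \in T'} w(e)
      \iff
      \sum_{e \in T} \log w(e) < \sum_{e \in T'} \log w(e)

    So running Prim or Kruskal on the weights log w(e) optimises the product; equivalently, both algorithms only ever compare edge weights, and any strictly increasing transform leaves those comparisons unchanged. The positivity assumption matters: with zero or negative weights the logarithm argument breaks down.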

    Read the article

  • Membership in ASP.Net applications - part 1

    - by nikolaosk
    So far in all my posts, I have never mentioned anything about how to implement authentication/authorisation mechanisms in a web site. In all our professional web applications we do need some sort of mechanism to verify who our users are and what privileges they have in our site. This is the first post in a series of posts investigating how to implement membership (authentication + authorisation) in ASP.Net applications. We will look into the built-in web server security controls. We will look at the built...(read more)

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage: Frequent local commits This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks.  It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points.  A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally.  There should be several of these in any given day.  2 hours is a good indicator that you might not be leveraging the power of frequent local commits.  Once you have verified a set of changes works, save them away, otherwise run the risk of introducing bugs into it when working on the next task.  The notion of a task By task I mean a related set of changes that can be completed in a few hours or less.  In the same token don’t make your tasks so small that critically related changes aren’t grouped together.  Use your intuition and the rest of these principles and I think you will find what is comfortable for you. Partial commits Sometimes one task explodes or unknowingly encompasses other tasks, at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder.  This will often entail committing part of the work and continuing on the rest. Outstanding changes as a guide If you don't commit often it might mean you are not leveraging your version control history to help guide your work.  It's a great way to see what has changed and might be causing problems.  The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed.  This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times. Throw away / stash commits There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand.  If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors.  I find myself doing this about once a week, especially when doing exploratory re-factoring.  It's much easier if I can just revert all outstanding changes. Sync with the central repository daily The rest of us depend on your changes.  Don't let them sit on your computer longer than they have to.  Waiting increases the chances of merge conflict which just decreases productivity.  It also prohibits us from doing deploys when people say they are done but have not merged centrally.  This should be done daily!  Find a way to partition the work you are doing so that you can sync at least once daily. Things I discourage: Lots of partial commits right at the end of a series of changes If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit.  Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever growing list of changes. 
Committing single files Committing single files means you waited too long and no longer understand all the changes involved.  It may mean there were overlapping changes in single files that cannot be isolated.  In either case, go back to the suggestions above to avoid this.  Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to that of the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, given the current workload, was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema. Note the WCF-SAP data formats (on the binding tab of the configuration dialog box this is the ‘ReceiveIdocFormat’ field): Typed: Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags. RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format. String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top-level XML tags. The files also contained some strange characters at the end of each line. I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an XPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments. Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir's blog: Since the WCF-based SAP Adapter is ... well, WCF-based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place.
    If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and: (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData'] (b) Choose "String" for the Node Encoding. What we've done is use an XPath to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.   This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: When I followed Mustansir’s blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.". Although the new flat file looked the same as the old one, there was a difference. In the original file all lines in the document were exactly 1064 characters long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines.
    Execute method of the custom pipeline component:

    public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
    {
        //Convert Stream to a string
        Stream s = null;
        IBaseMessagePart bodyPart = inmsg.BodyPart;

        // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and a
        // getter and setter for the file adapter. Use GetOriginalDataStream to get data instead.
        if (bodyPart != null)
            s = bodyPart.GetOriginalDataStream();

        string newMsg = string.Empty;
        string strLine;
        try
        {
            StreamReader sr = new StreamReader(s);
            strLine = sr.ReadLine();
            while (strLine != null)
            {
                //Execute padding code
                strLine = strLine.PadRight(1064, ' ') + "\r\n";
                newMsg += strLine;
                strLine = sr.ReadLine();
            }
            sr.Close();
        }
        catch (IOException ex)
        {
            throw new Exception("Error occurred trying to pad the message to 1064 characters");
        }

        //Convert back to stream and set to Data property
        inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
        //reset the position of the stream to zero
        inmsg.BodyPart.Data.Position = 0;
        return inmsg;
    }

    Read the article

  • An error when installing MVVM Light templates for VS10 Express

    - by Laurent Bugnion
    If you tried to install the Visual Studio 2010 Express project and item templates of MVVM Light V3 SP1 (with the Windows Phone tools hotfix), you may have encountered an error when unzipping the package, telling you that a file was corrupted. This error was reported to me a couple of times, and I was able to reproduce the issue. I just published a new version that takes care of this issue. If you encountered that error when installing, please download the new version and try again. Note that the error was only there for the Visual Studio 2010 Express version of the templates. No other changes were made.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Upgrading Code from 2007 to 2010

    - by MOSSLover
    So I’ve been doing some upgrades just to see if things will work from 2007 to 2010.  So far most of the stuff I want works, but obviously there are some things that break.  Did you guys know that in 2007 you could add a webpart to the view pages for lists and libraries without losing the toolbar?  In 2010 the ribbon disappears every time you add a webpart.  So if you are using Scot Hillier’s Codeplex project to hide buttons it will not work the same way, because the ribbon is going to disappear altogether. I have also learned another reason why standalone installations are the bane of my existence.  Nine times out of ten the installation is done using Network Service as the application pool account.  You are wondering why this is bad?  Well, let’s just say the site collection administrator with local admin rights wants to attach to the IIS worker process and debug, say, a webpart.  Visual Studio 2010 will throw a nasty error that tells you that you are not an administrator.  You will say, but I am an administrator?  I have all the correct group permissions on the server and on SQL and in SharePoint.  Then you will go in and decide, let’s add my own admin account just to see if I can attach the debugger, and you will notice that works properly.  So the moral of the story is: create a separate account on your development environment to run all the SharePoint services and such.  You don’t need to go all out and create the best-practices number of accounts if it’s just your dev environment.  I would at least create one single account to run all your SharePoint processes (services, SQL, and app pool).  Also, don’t run a standalone install unless you want to kill kittens (this is a quote from Todd Klindt).  We love kittens; they are cute and awesome.  Besides, you learn more if you click Complete and just skip standalone.  You will learn how to set up SQL Server 2008 and you will learn how to configure your environment.  It will help you in the long run.  So I have ranted enough for today; I figure these are enough tidbits for you this time around.  To the two of you who read my blog - and I know some of you are friends who don’t understand SharePoint - I might as well have just done “wahwahwahwah” in Charlie Brown adult-speak.  Thanks for reading as usual.  I’ll catch you all when I complain more about the upgrade process and share more tidbits, which will inevitably become a presentation at a conference or two. Technorati Tags: Upgrade Code SharePoint 2007 to 2010,Visual Studio 2010,SharePoint 2010

    Read the article
