Search Results

Search found 27143 results on 1086 pages for 'include path'.

Page 881 of 1086

  • Problem with sitecore home page

    - by Mirage
    Hi guys, I am new to Sitecore, ASP.NET, and IIS. I have installed all of them on my Server 2008 machine. When I go to /localhost/sitecore/Website/sitecore, I get the following error. Can anyone tell me what this is and what I need to do?

    Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

    Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

    Source Error:

        Line 2575: <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        Line 2576: </httpHandlers>
        Line 2577: <membership defaultProvider="sitecore">   <-- this line shows the error
        Line 2578:   <providers>
        Line 2579:     <clear />
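
    As the parser message itself says, this error usually means the folder holding that web.config (here the Sitecore Website folder) is being served as a plain virtual directory rather than as an IIS application. A hedged sketch of converting it with appcmd on IIS 7 follows; the site name and paths are placeholders, so substitute your own:

        %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/sitecore/Website /physicalPath:"C:\inetpub\wwwroot\sitecore\Website"

    Alternatively, point a dedicated IIS site directly at the Website folder, which is the layout most Sitecore installs assume, and browse to /sitecore on that site.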

    Read the article

  • Display ñ on a C# .NET application

    - by mmr
    I have a localization issue. One of my industrious coworkers has replaced all the strings throughout our application with constants that are contained in a dictionary. That dictionary gets various strings placed in it once the user selects a language (English by default, but target languages are German, Spanish, French, Portuguese, Mandarin, and Thai). For our test of this functionality, we wanted to change a button to include text which has a ñ character, which appears both in Spanish and in the Arial Unicode MS font (which we're using throughout the application). Problem is, the ñ is appearing as a square block, as if the program did not know how to display it. When I debug into that particular string being read from disk, the debugger reports that character as a square block as well. So where is the failure? I think it could be in a few places:

    1) Notepad may not be Unicode aware, so the ñ displayed there is not the same as what VS2008 expects, and so the program interprets the character as a square (EDIT: Notepad shows the same characters as VS; i.e., they both show the ñ, in the same place.)
    2) VS2008 can't handle ñ. I find that very, very hard to believe.
    3) The text is read in properly, but the default font for VS2008 can't display it, which is why the debugger shows a square.
    4) The text is not read in properly, and I should use something other than a regular StreamReader to get strings.
    5) The text is read in properly, but the default String class in C# doesn't handle ñ well. I find that very, very hard to believe.
    6) The version of Arial Unicode MS I have doesn't have ñ, despite it being listed as one of the 50k characters by http://www.fileinfo.info.

    Anything else I could have left out? Thanks for any help!
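
    If the character is already wrong in the debugger, the most likely of these is 4): the bytes on disk are being decoded with the wrong encoding. StreamReader defaults to UTF-8, so a ñ saved in an ANSI/Windows-1252 file decodes to the replacement character. A minimal sketch with the encoding stated explicitly (the file path is hypothetical; pick the encoding the strings file was really saved with):

        using System.IO;
        using System.Text;

        // If the strings file was saved as ANSI/Windows-1252:
        string text = File.ReadAllText(@"C:\locale\es.txt", Encoding.GetEncoding(1252));

        // If it was saved as UTF-8 (with or without BOM), the default already works,
        // but stating it documents the assumption:
        // string text = File.ReadAllText(@"C:\locale\es.txt", Encoding.UTF8);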

    Read the article

  • There's a black hole in my server (TcpClient, TcpListener)

    - by Matías
    Hi, I'm trying to build a server that will receive files sent by clients over a network. If the client decides to send one file at a time, there's no problem, I get the file as I expected, but if it tries to send more than one I only get the first one.

    Here's the server code (I'm using one thread per connected client):

        public void ProcessClients()
        {
            while (IsListening)
            {
                ClientHandler clientHandler = new ClientHandler(listener.AcceptTcpClient());
                Thread thread = new Thread(new ThreadStart(clientHandler.Process));
                thread.Start();
            }
        }

    The following code is part of the ClientHandler class:

        public void Process()
        {
            while (client.Connected)
            {
                using (MemoryStream memStream = new MemoryStream())
                {
                    int read;
                    while ((read = client.GetStream().Read(buffer, 0, buffer.Length)) > 0)
                    {
                        memStream.Write(buffer, 0, read);
                    }
                    if (memStream.Length > 0)
                    {
                        Packet receivedPacket = (Packet)Tools.Deserialize(memStream.ToArray());
                        File.WriteAllBytes(
                            Path.Combine(
                                Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory),
                                Guid.NewGuid() + receivedPacket.Filename),
                            receivedPacket.Content);
                    }
                }
            }
        }

    On the first iteration I get the first file sent, but after it I don't get anything. I've tried using a Thread.Sleep(1000) at the end of every iteration without any luck. On the other side I have this code (for clients):

        client.Connect();
        foreach (var oneFilename in fileList)
            client.Upload(oneFilename);
        client.Disconnect();

    The Upload method:

        public void Upload(string filename)
        {
            FileInfo fileInfo = new FileInfo(filename);
            Packet packet = new Packet()
            {
                Filename = fileInfo.Name,
                Content = File.ReadAllBytes(filename)
            };
            byte[] serializedPacket = Tools.Serialize(packet);
            netStream.Write(serializedPacket, 0, serializedPacket.Length);
            netStream.Flush();
        }

    netStream (a NetworkStream) is opened in the Connect method and closed in Disconnect. Where's the black hole? Can I send multiple objects as I'm trying to do? Thanks for your time.
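
    The inner Read loop only returns 0 when the remote side closes the connection, so for a client that stays connected it never finishes after the first batch of bytes, and a TCP stream has no built-in message boundaries anyway. A minimal sketch of length-prefix framing, reusing the Packet/Tools names from the question (ReadExactly is a hypothetical helper):

        // Client side: write the payload length, then the payload.
        byte[] payload = Tools.Serialize(packet);
        netStream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        netStream.Write(payload, 0, payload.Length);

        // Server side: read exactly one framed packet per loop iteration.
        private static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException("client disconnected");
                offset += read;
            }
            return buffer;
        }

        // inside ClientHandler.Process():
        // NetworkStream stream = client.GetStream();
        // int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        // Packet receivedPacket = (Packet)Tools.Deserialize(ReadExactly(stream, length));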

    Read the article

  • Extending spring based app

    - by pitr
    I have a Spring-based web service. I now want to build a sort of plugin for it that extends it with beans. What I have now in web.xml is:

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/classes/*-configuration.xml</param-value>
        </context-param>

    My core app has main-configuration.xml which declares its beans. My plugin app has plugin-configuration.xml which declares additional beans. Now when I deploy, my build deploys plugin.jar into /WEB-INF/lib/ and copies plugin-configuration.xml into /WEB-INF/classes/, all under main.war. This is all fine (although I think there could be a better solution), but when I develop the plugin, I don't want to have two projects in Eclipse with dependencies. I wish to have a main.jar that I include as a library. However, web.xml from main.jar isn't automatically discovered. How can I do this? Bean injection? Bean discovery of some sort? Something else?

    Note: I expect to have multiple different plugins in production, but development of each of them will be against pure main.jar. Thank you.
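
    One way to make configuration shipped inside jars discoverable is Spring's classpath*: prefix, which searches every jar on the classpath rather than just /WEB-INF/classes. A minimal sketch, assuming the XML files are moved into the jars under a conventional location (the META-INF/spring path is an assumption, not something from the question):

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <!-- picks up main-configuration.xml from main.jar and
                 plugin-configuration.xml from every plugin jar on the classpath -->
            <param-value>classpath*:META-INF/spring/*-configuration.xml</param-value>
        </context-param>

    During plugin development you then only need main.jar on the classpath; whatever plugin jars are present at deploy time contribute their own bean definitions without any extra wiring.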

    Read the article

  • Params order in Foo.new(params[:foo]), need one before the other (Rails)

    - by Jeena
    I have a problem which I don't know how to fix. It has to do with the unsorted params hash. I have a Reservation object which has a virtual time= attribute and a virtual eating_session= attribute. When I set time= I also want to validate it via an external server request. I do that with the help of the times() method, which does a lookup on another server and saves all possible times in the @times variable. The problem is that the times() method needs the eating_session attribute to find out which times are valid, but Rails sometimes calls the time= method first, before there is any eating_session in the Reservation object, when I just do @reservation = Reservation.new(params[:reservation]).

        class ReservationsController < ApplicationController
          def new
            @reservation = Reservation.new(params[:reservation])
            # ...
          end
        end

        class Reservation < ActiveRecord::Base
          include SoapClient

          attr_accessor :date, :time
          belongs_to :eating_session

          def time=(time)
            @time = times.find { |t| t[:time] == time }
          end

          def times
            return @times if defined? @times
            @times = []
            response = call_soap :search_availability, {
              # eating_session is sometimes nil
              :session_id  => eating_session.code,   # <- HERE IS THE PROBLEM
              :dining_date => date
            }
            response[:result].each do |result|
              @times << {
                :time => "#{DateTime.parse(result[:time]).strftime("%H:%M")}",
                :correlation_data => result[:correlation_data]
              }
            end
            @times
          end
        end

    I have no idea how to fix this; any help is appreciated.
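
    Since mass assignment gives no ordering guarantee, one workaround is to keep time out of the mass-assigned hash and set it explicitly after everything else. A minimal sketch for the controller (Hash#except comes from ActiveSupport; the attribute names are the ones from the question):

        def new
          attrs = params[:reservation] || {}
          @reservation = Reservation.new(attrs.except(:time))   # eating_session gets set here
          @reservation.time = attrs[:time] if attrs[:time]      # assigned last, once the session exists
          # ...
        end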

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof of concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with has been something along the line of a base set of entity tables in 3rd normal form; these will contain the facts that are common among ALL products. Then, I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views.

    My big concerns here are:

    Performance - Attribute-Entity-Value schemas are flexible, but typically perform poorly; should I be concerned?
    More Performance - Denormalizing using materialized views may have some risks; I'm not positive on this yet.
    Complexity - While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.

    For those who have designed product catalogs for large scale enterprises, am I going down the totally wrong path? Is there any good best practice schema design reading available for product catalogs?
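
    A minimal sketch of the shape being described, with illustrative table and attribute names that are not from the question: a normalized base table, an attribute-value extension table, and a per-entity-type flattened view that could be materialized or indexed subject to the platform's restrictions:

        CREATE TABLE product (
            product_id   INT PRIMARY KEY,
            sku          VARCHAR(50)  NOT NULL,
            name         VARCHAR(200) NOT NULL,
            product_type VARCHAR(50)  NOT NULL   -- material, one-time service, recurring service
        );

        CREATE TABLE product_attribute (
            product_id INT           NOT NULL REFERENCES product (product_id),
            attribute  VARCHAR(100)  NOT NULL,
            value      VARCHAR(4000) NULL,
            PRIMARY KEY (product_id, attribute)
        );

        -- One flattened view per entity type; a new attribute becomes new rows in
        -- product_attribute plus one extra column here, with no base-table change.
        CREATE VIEW physical_product AS
        SELECT p.product_id,
               p.sku,
               p.name,
               MAX(CASE WHEN a.attribute = 'weight_kg' THEN a.value END) AS weight_kg,
               MAX(CASE WHEN a.attribute = 'color'     THEN a.value END) AS color
        FROM product p
        LEFT JOIN product_attribute a ON a.product_id = p.product_id
        WHERE p.product_type = 'material'
        GROUP BY p.product_id, p.sku, p.name;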

    Read the article

  • How can I filter a JTable?

    - by Jonas
    I would like to filter a JTable, but I don't understand how I can do it. I have read How to Use Tables - Sorting and Filtering and I have tried with the code below, but with that filter no rows at all are shown in my table, and I don't understand which column it is filtered on.

        private void myFilter() {
            RowFilter<MyModel, Object> rf = null;
            try {
                rf = RowFilter.regexFilter(filterFld.getText(), 0);
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(rf);
        }

    MyModel has three columns; the first two are strings and the last column is of type Integer. How can I apply the filter above, consider the text in filterFld.getText() and only filter the rows where the text is matched on the second column? I would like to show all rows that start with the text specified by filterFld.getText(). I.e. if the text is APP then the JTable should contain the rows where the second column starts with APPLE, APPLICATION but not the rows where the second column is CAR, ORANGE. I have also tried with this filter:

        RowFilter<MyModel, Integer> itemFilter = new RowFilter<MyModel, Integer>() {
            public boolean include(Entry<? extends MyModel, ? extends Integer> entry) {
                MyModel model = entry.getModel();
                MyItem item = model.getRecord(entry.getIdentifier());
                if (item.getSecondColumn().startsWith("APP")) {
                    return true;
                } else {
                    return false;
                }
            }
        };

    How can I write a filter that is filtering the JTable on the second column, specified by my textfield?
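
    A minimal sketch of a starts-with filter bound to column index 1 (the second column); the field and sorter names are the ones from the question, and Pattern.quote guards against regex metacharacters in whatever the user typed:

        private void myFilter() {
            String text = filterFld.getText();
            // "^" anchors the match at the start of the cell value, and the trailing
            // index argument restricts the filter to column 1 only.
            sorter.setRowFilter(
                RowFilter.regexFilter("^" + java.util.regex.Pattern.quote(text), 1));
        }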

    Read the article

  • Java Runtime command line Process

    - by AEIOU
    I have a class with the following code:

        Process process = null;
        try {
            process = Runtime.getRuntime().exec("gs -version");
            System.out.println(process.toString());
        } catch (Exception e1) {
            e1.printStackTrace();
        } finally {
            process.destroy();
        }

    I can run "gs -version" on my command line and get:

        GPL Ghostscript 8.71 (2010-02-10)
        Copyright (C) 2010 Artifex Software, Inc.  All rights reserved.

    So I know I have the path at least set somewhere. I can run that class from the command line and it works. But when I run it using Eclipse I get the following error:

        java.io.IOException: Cannot run program "gs": error=2, No such file or directory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
            at java.lang.Runtime.exec(Runtime.java:593)
            at java.lang.Runtime.exec(Runtime.java:431)
            at java.lang.Runtime.exec(Runtime.java:328)
            at clris.batchdownloader.TestJDBC.main(TestJDBC.java:17)
        Caused by: java.io.IOException: error=2, No such file or directory
            at java.lang.UNIXProcess.forkAndExec(Native Method)
            at java.lang.UNIXProcess.(UNIXProcess.java:53)
            at java.lang.ProcessImpl.start(ProcessImpl.java:91)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
            ... 4 more

    In my program, I can replace "gs" with "java", "mvn", or "svn" and it works. But "gs" does not. It's only in Eclipse that I have this problem. Any ideas on what I need to do to resolve this issue?
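
    Eclipse usually launches the JVM without the login shell's PATH additions, so commands installed under /usr/local/bin (where Ghostscript often lives, unlike java/mvn/svn) are not found. A minimal sketch of two workarounds; the /usr/local/bin location is an assumption, so check the real location with "which gs" first:

        // Option 1: call the executable by absolute path.
        Process process = Runtime.getRuntime().exec("/usr/local/bin/gs -version");

        // Option 2: extend PATH for the child process with ProcessBuilder.
        ProcessBuilder pb = new ProcessBuilder("gs", "-version");
        pb.environment().put("PATH", pb.environment().get("PATH") + ":/usr/local/bin");
        pb.redirectErrorStream(true);
        Process p = pb.start();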

    Read the article

  • ajax delay load UserControl asp.net

    - by user196202
    Regarding AJAX delayed loading of UserControls (or any controls), based on the post at Encosia.com: http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/

    I tried to implement it, but I noticed that it can be done only for simple controls or UserControls that have simple ASP.NET controls (or HTML tags). When advanced dynamic AJAX controls are involved (like AjaxControlToolkit or Telerik controls) that have JavaScript inside them, this method of injecting the HTML code into the .InnerHtml property of a div tag (for example) IS NOT WORKING. I read that the browser needs to load the script on page load, and after that it won't interpret scripts injected via .InnerHtml.

    So I attached an example of a delayed-load project (from encosia.com by Dave Ward) with my modification (look at DefaultPopup.aspx, beforePopup.aspx and AfterPopup.aspx), in which I modified the RssReader to show a ListView with popup items (implemented via the ACT HoverMenuExtender). In the regular way the popup items are shown correctly, but with the delayed load, which is done by creating a virtual page for rendering the HTML and injecting it into the .InnerHtml property, this ISN'T WORKING.

    So my question is: is there a way to do delayed loading for controls which include scripts, like ACT, Telerik and others? And for the AJAX templates, if I need to inject an advanced control into the page, how do I do it with your approach?

    Thanks very much. (I can't attach files here, so please ask me by mail ([email protected]) and I'll send them to you.)

    Zahi Kramer

    Read the article

  • Error in {% markdown %} filter in Django Nonrel

    - by Robert Smith
    I'm having trouble using Markdown in Django Nonrel. I followed these instructions (added 'django.contrib.markup' to INSTALLED_APPS, included {% load markup %} in the template and used the |markdown filter after installing python-markdown) but I get the following error:

        Error in {% markdown %} filter: The Python markdown library isn't installed.

    It points at this code in /path/to/project/django/contrib/markup/templatetags/markup.py, in markdown:

            they will be silently ignored.
            """
            try:
                import markdown
            except ImportError:
                if settings.DEBUG:
                    raise template.TemplateSyntaxError("Error in {% markdown %} filter: The Python markdown library isn't installed.")
                ...
                return force_unicode(value)
            else:
                # markdown.version was first added in 1.6b. The only version of markdown
                # to fully support extensions before 1.6b was the shortlived 1.6a.
                if hasattr(markdown, 'version'):
                    extensions = [e for e in arg.split(",") if e]

    It seems obvious that import markdown is causing the problem, but when I run:

        $ python manage.py shell
        >>> import elementtree
        >>> import markdown

    everything works alright. Running Markdown 2.0.3, Django 1.3.1, Python 2.7.

    UPDATE: I thought maybe this was an issue related to permissions, so I changed my project via chmod 777 -R, but it didn't work. Ideas? Thanks!
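
    When an import works in manage.py shell but fails under the web server, the two processes are usually not running the same Python or the same sys.path (and on App Engine-style hosting, third-party packages generally have to live inside the project directory). A minimal sketch of a throwaway view for comparing the two environments; the view name and its URL wiring are hypothetical:

        import sys
        from django.http import HttpResponse

        def debug_markdown(request):
            try:
                import markdown
                found = markdown.__file__
            except ImportError, exc:          # Python 2.7 syntax, matching the question
                found = "import failed: %s" % exc
            body = "executable: %s\n\nsys.path:\n  %s\n\nmarkdown: %s" % (
                sys.executable, "\n  ".join(sys.path), found)
            return HttpResponse(body, content_type="text/plain")

    If the paths differ, installing markdown into the interpreter the server actually uses (or copying the package into the project for sandboxed deployments) normally clears the error.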

    Read the article

  • jquery ajax post canceled

    - by hsemu
    I want to track the mouse click events on a set of UI components on a set of pages. To do this, I am using the following jQuery/AJAX calls (trimmed):

    1. The AJAX call which does the click logging:

        myClickLogger = {
            endpoint: '/path/to/my/logging/endpoint.html',
            logClickEvent: function(clickCode) {
                $.ajax({
                    'type': 'POST',
                    'url': this.endpoint,
                    'async': true,
                    'cache': false,
                    'global': false,
                    'data': { 'clickCode': clickCode },
                    'error': function(xhr, status, err) {
                        alert("DEBUG: status" + status + " \nError:" + err);
                    },
                    'success': function(data) {
                        if (data.status != 200) {
                            alert("Error occured!");
                        }
                    }
                });
            }
        };

    2. The jQuery click event which calls the AJAX logger (clickCode is an identifier for which button/image was clicked):

        $(document).ready(function() {
            $(".myClickEvent[clickName]").click(function() {
                var clickCode = $(this).attr("clickName");
                myClickLogger.logClickEvent(clickCode);
            });
        });

    The AJAX call in (1) is "canceled" by the browser whenever the button click being tracked leads to a new page. If I change 'async' to 'false', then the AJAX call succeeds. Also, click events which do not lead to a new page succeed; only the click events leading to a new page are being canceled. I do not want to make the call synchronous. Any ideas what the issue could be? How can I guarantee that the asynchronous call finishes before the click event takes the user to a new page?
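
    The browser cancels in-flight XHRs as soon as it starts navigating away, which is exactly what happens for the clicks that load a new page. One workaround, sketched here assuming the tracked elements are links with an href (buttons that submit forms need the same idea applied to the submit), is to postpone the navigation until the logging request completes, with a short fallback timer so a slow endpoint cannot trap the user:

        $(".myClickEvent[clickName]").click(function() {
            var clickCode = $(this).attr("clickName");
            var href = $(this).attr("href");

            if (!href) {                               // click stays on the page
                myClickLogger.logClickEvent(clickCode);
                return true;
            }

            var navigate = function() { window.location = href; };
            setTimeout(navigate, 500);                 // fallback if the POST hangs
            $.ajax({
                type: 'POST',
                url: myClickLogger.endpoint,
                data: { clickCode: clickCode },
                complete: navigate                     // navigate once logging finishes
            });
            return false;                              // cancel the immediate navigation
        });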

    Read the article

  • hg archive to Remote Directory

    - by Brett Daniel
    Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:

        hg archive ssh://[email protected]/path/to/archive

    However, that does not appear to work. It instead creates a directory called ssh: in the current directory. I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping the destination directory. However, I would like to know if there is a better way.

        if [[ $# != 1 ]]; then
            echo "Usage: $0 [user@]hostname:remote_dir"
            exit
        fi

        arg=$1
        arg=${arg%/}            # remove trailing slash
        host=${arg%%:*}
        remote_dir=${arg##*:}

        # zip named to match lowest directory in $remote_dir
        zip=${remote_dir##*/}.zip

        # root of archive will match zip name
        hg archive -t zip $zip

        # make $remote_dir if it doesn't exist
        ssh $host mkdir --parents $remote_dir

        # copy zip over ssh into destination
        scp $zip $host:$remote_dir

        # unzip into containing directory (will prompt for overwrite)
        ssh $host unzip $remote_dir/$zip -d $remote_dir/..

        # clean up zips
        ssh $host rm $remote_dir/$zip
        rm $zip

    Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
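
    A shorter route, as a sketch (host and paths are placeholders): hg archive can write a tar stream to standard output when the destination is "-", and that stream can be piped straight into tar on the remote side, so no temporary zip is needed and Mercurial is still only required locally:

        # --prefix controls the top-level directory inside the archive; without it
        # the files unpack under a directory named after the repository.
        hg archive -t tar --prefix archive/ - \
            | ssh user@host 'mkdir -p /path/to && tar -x -C /path/to'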

    Read the article

  • ManyToManyField "table exist" error on syncdb

    - by Derek Reynolds
    When I include a ManyToManyField in one of my models, the following error is thrown:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "/Library/Python/2.6/site-packages/django/core/management/commands/syncdb.py", line 93, in handle_noargs
            cursor.execute(statement)
          File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "/Library/Python/2.6/site-packages/django/db/backends/mysql/base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/cursors.py", line 173, in execute
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/connections.py", line 36, in defaulterrorhandler
        _mysql_exceptions.OperationalError: (1050, "Table 'orders_proof_approved_associations' already exists")

    Field definition:

        approved_associations = models.ManyToManyField(Association)

    Everything works fine when I remove the field, and the table is nowhere in sight. Any thoughts as to why this would happen?

    Read the article

  • How to use SerialPort SerialDataReceived help

    - by devnet247
    Hi, I'm not sure how to handle SerialPort DataReceived.

    Scenario: I have an application that communicates with a device, and this device returns a status. This happens in different stages, e.g.

        public enum ActionState
        {
            Started,
            InProgress,
            Completed
            // etc...
        }

    Now if I were to use the DataReceivedEventHandler, how can I tell which method is executing, e.g. Action1 or Action2 etc.? I also want to include some sort of timeout when getting back stuff from the device. Any example or advice?

        public ActionState Action1()
        {
            serialPort.Write(myData);
            string result = serialPort.ReadExisting();
            // convert to ActionState and return
            return ConvertToActionState(result);
        }

        public ActionState Action2()
        {
            serialPort.Write(myData);
            string result = serialPort.ReadExisting();
            // convert to ActionState and return
            return ConvertToActionState(result);
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // How can I use this to detect which method is firing and set the ActionState enum accordingly?
        }

        private ActionState ConvertToActionState(string result)
        {
            if (result == "1")
                return ActionState.Started;
            else if (result == "2")
                return ActionState.Completed;
            // etc...
        }
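
    One way to sidestep the "which action does this event belong to" problem is to drop the global DataReceived handler and have each action do its own blocking read with a timeout, so a response is always consumed by the command that sent the request. A minimal sketch reusing the names from the question (ActionState.TimedOut is a hypothetical extra member, and ReadLine assumes the device terminates responses with a newline; use ReadTo with the device's real terminator otherwise):

        private ActionState SendAndRead(string myData)
        {
            serialPort.ReadTimeout = 3000;      // ms; tune to the device
            serialPort.DiscardInBuffer();       // drop stale bytes from earlier commands
            serialPort.Write(myData);
            try
            {
                string result = serialPort.ReadLine();
                return ConvertToActionState(result.Trim());
            }
            catch (TimeoutException)
            {
                return ActionState.TimedOut;    // hypothetical member for the timeout case
            }
        }

        public ActionState Action1() { return SendAndRead(myData); }
        public ActionState Action2() { return SendAndRead(myData); }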

    Read the article

  • Basic jUnit Questions

    - by Epitaph
    I was testing a string multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String):

        public String multiply(String num1, String num2);

    I have done the implementation and created a test class with test cases involving the input String parameters as:

    1) valid numbers
    2) characters
    3) special symbols
    4) empty string
    5) null value
    6) 0
    7) negative number
    8) float
    9) boundary values
    10) numbers that are valid but whose product is out of range
    11) numbers with a + sign (+23)

    My questions:

    1) I'd like to know if each and every assertEquals() should be in its own test method? Or can I group similar test cases, like a testInvalidArguments() that contains all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
    2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios? "a" as the first argument, "a" as the second argument, "a" and "b" as the 2 arguments.
    3) As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception. Then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
    4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough enough?
    5) Following from the above point, have I successfully tested the multiply() method?
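
    On question 1, a common JUnit 4 convention is one test method per behaviour rather than per assert, with invalid inputs that share an expected outcome grouped or expressed via expected exceptions. A minimal sketch (the StringMultiplier class name is assumed; multiply() comes from the question):

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class StringMultiplierTest {

            private final StringMultiplier m = new StringMultiplier();

            @Test
            public void multipliesValidNumbers() {
                assertEquals("56088", m.multiply("123", "456"));
                assertEquals("-46", m.multiply("-23", "2"));
                assertEquals("0", m.multiply("0", "999"));
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericFirstArgument() {
                m.multiply("a", "2");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericSecondArgument() {
                m.multiply("2", "a");
            }
        }

    Whether multiply() should throw NumberFormatException for "a" is itself part of the contract under test, so the expected exception here is an assumption carried over from the question, not a requirement.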

    Read the article

  • WCF Custom Delegation/Authentication without Kerberos

    - by MichaelGG
    I'm building a simple WCF service, probably exposed via HTTPS, using NTLM security. Since not all users are going to be capable of using the service directly, we're writing a simple web front-end for the service. Users will auth with HTML to the web front-end. What we want is a way to delegate the user of the web site all the way to the WCF service. I understand Kerberos delegation can do this, but that's not available to us. What I want to do is make the web front-end account a specially trusted account, so that if a request hits the WCF service authenticated as "DOMAIN\WebApp", we read a WCF message header containing the real identity, then switch the principal to that and continue as normal. Is there any "simple" way of achieving this? Should I give up entirely on this idea, and instead make users "sign-in" to the WCF app and then do complete custom auth? The WCF extensibility and security options seem so vast, I'd like to get a heads up on which path to start heading down.

    Read the article

  • python: nonblocking subprocess, check stdout

    - by Will Cavanagh
    OK, so the problem I'm trying to solve is this: I need to run a program with some flags set, check on its progress and report back to a server. So I need my script to avoid blocking while the program executes, but I also need to be able to read the output. Unfortunately, I don't think any of the methods available from Popen will read the output without blocking. I tried the following, which is a bit hack-y (are we allowed to read and write to the same file from two different objects?):

        import time
        import subprocess
        from subprocess import *

        with open("stdout.txt", "wb") as outf:
            with open("stderr.txt", "wb") as errf:
                command = ['Path\\To\\Program.exe', 'para', 'met', 'ers']
                p = subprocess.Popen(command, stdout=outf, stderr=errf)
                isdone = False
                while not isdone:
                    with open("stdout.txt", "rb") as readoutf:  # this feels wrong
                        for line in readoutf:
                            print(line)
                    print("waiting...\r\n")
                    if p.poll() != None:
                        done = True
                    time.sleep(1)
                output = p.communicate()[0]
                print(output)

    Unfortunately, Popen doesn't seem to write to my file until after the command terminates. Does anyone know of a way to do this? I'm not dedicated to using Python, but I do need to send POST requests to a server in the same script, so Python seemed like an easier choice than, say, shell scripting. Thanks! Will
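
    The file looks empty because the child's stdout is block-buffered when it goes to a file, so little or nothing is flushed until the program exits (and the program's own buffering still applies even with a pipe). One way around it, as a sketch using the placeholder command from the question: pipe stdout and drain it on a background thread, leaving the main loop free to poll and send its progress reports:

        import subprocess
        import threading
        import time

        command = ['Path\\To\\Program.exe', 'para', 'met', 'ers']
        p = subprocess.Popen(command, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)

        collected = []

        def drain(pipe):
            # readline() blocks, but only on this background thread
            for line in iter(pipe.readline, b''):
                collected.append(line)
                print(line)
            pipe.close()

        reader = threading.Thread(target=drain, args=(p.stdout,))
        reader.daemon = True
        reader.start()

        while p.poll() is None:
            # main thread is free here: POST the contents of `collected` to the server
            print("waiting...")
            time.sleep(1)

        reader.join()
        print("exit code: %d" % p.returncode)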

    Read the article

  • Symfony file upload - "Array" stored in database instead of the actual filename

    - by Guillaume Flandre
    I'm using Symfony 1.4.4 and Doctrine and I need to upload an image on the server. I've done that hundreds of times without any problem, but this time something weird happens: instead of the filename being stored in the database, I find the string "Array".

    Here's what I'm doing, in my form:

        $this->useFields(array('filename'));
        $this->embedI18n(sfConfig::get('app_cultures'));

        $this->widgetSchema['filename'] = new sfWidgetFormInputFileEditable(array(
            'file_src'  => '/uploads/flash/'.$this->getObject()->getFilename(),
            'is_image'  => true,
            'edit_mode' => !$this->isNew(),
            'template'  => '<div id="">%file%</div><div id=""><h3 class="">change picture</h3>%input%</div>',
        ));

        $this->setValidator['filename'] = new sfValidatorFile(array(
            'mime_types' => 'web_images',
            'path'       => sfConfig::get('sf_upload_dir').'/flash',
        ));

    In my action:

        public function executeIndex( sfWebRequest $request )
        {
            $this->flashContents = $this->page->getFlashContents();

            $flash = new FlashContent();
            $this->flashForm = new FlashContentForm($flash);

            $this->processFlashContentForm($request, $this->flashForm);
        }

        protected function processFlashContentForm($request, $form)
        {
            if ( $form->isSubmitted( $request ) )
            {
                $form->bind( $request->getParameter( $form->getName() ), $request->getFiles( $form->getName() ) );
                if ( $form->isValid() )
                {
                    $form->save();
                    $this->getUser()->setFlash( 'notice', $form->isNew() ? 'Added.' : 'Updated.' );
                    $this->redirect( '@home' );
                }
            }
        }

    Before binding my parameters, everything's fine: $request->getFiles($form->getName()) returns my files. But afterwards, $form->getValue('filename') returns the string "Array". Did it happen to any of you guys, or do you see anything wrong with my code?

    Edit: I added the fact that I'm embedding another form, which may be the problem (see the form code above).

    Read the article

  • Azure application working on emulator but not on azure cloud

    - by Hisham Riaz
    Firstly, I am developing my MVC3 application in Visual Web Developer 2010 Express, by migrating my MVC3 (cshtml) files onto MVC2. It works great on the local system using the emulator, but once I deploy the application to Azure it gives runtime errors, for example:

        The layout page "~/Views/Shared/test_page.cshtml" could not be found at the following path: "~/Views/Shared/test_page.cshtml".

        Source Error:
        Line 8:  //Layout = "~/Views/Shared/upload.cshtml";
        Line 9:  //Layout = "~/Views/Shared/_Layout2.cshtml";
        Line 10: Layout = "~/Views/Shared/test_page.cshtml";
        Line 11: }
        Line 12: else

    The code is as follows. _ViewStart.cshtml:

        @{
            string AccId = Request.QueryString["AccId"].ToString();
            if (AccId == "0")
            {
                //Layout = "~/Views/Shared/upload.cshtml";
                //Layout = "~/Views/Shared/_Layout2.cshtml";
                Layout = "~/Views/Shared/test_page.cshtml";
            }
            else
            {
                string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath(AccId);
                Layout = LayOutPagePath;
            }
        }

    However, the page exists, and it works fine on the Azure emulator but not in the Azure cloud.

    Code for test_page.cshtml:

        @{
            var result = "1234567890";
            var temp_xml = MVCTest.Models.ComponentClass.GetTemplateAndTheme("1");       // returns xml
            string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath("1"); // returns string
        }

        @RenderBody()

        test_page
        @temp_xml
        @result
        @LayOutPagePath

    Read the article

  • Common JNDI resources in Tomcat

    - by Lehane
    Hi, I'm running a couple of servlet applications in Tomcat (5.5). All of the servlets use a common factory resource that is shared out using JNDI. At the moment, I can get everything working by including the factory resource as a GlobalNamingResource in /conf/server.xml, and then having each servlet's META-INF/context.xml file include a ResourceLink to the resource. Snippets from the XML files are included below. NOTE: I'm not that familiar with Tomcat, so I'm not saying that this is a good configuration!!!

    However, I now want to be able to install these servlets into multiple Tomcat instances automatically using an RPM. The RPM will firstly copy the WARs to the webapps directory, and the jars for the factory into the common/lib directory (which is fine). But it will also need to make sure that the factory resource is included as a resource for all of the servlets.

    What is the best way to add the resource globally? I'm not too keen on writing a script that goes into the server.xml file and adds in the resource that way. Is there any way for me to add in multiple server.xml files, so that I can write a new server-app.xml file and it will concatenate my settings to server.xml? Or, better still, to add this JNDI resource to all the servlets without using server.xml at all?

    P.S. Restarting the server will not be an issue, so I don't mind if the changes don't get picked up automatically. Thanks.

    Snippet from server.xml:

        <!-- Global JNDI resources -->
        <GlobalNamingResources>
            <Resource name="bean/MyFactory"
                      auth="Container"
                      type="com.somewhere.Connection"
                      factory="com.somewhere.MyFactory"/>
        </GlobalNamingResources>

    The entire servlet's META-INF/context.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context>
            <ResourceLink global="bean/MyFactory"
                          name="bean/MyFactory"
                          type="com.somewhere.MyFactory"/>
        </Context>
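
    One option that avoids server.xml entirely: elements placed in the default context file at $CATALINA_HOME/conf/context.xml are merged into the context of every deployed webapp, so the RPM could drop (or append to) that single file and declare the resource directly there, with no per-app ResourceLink needed. A sketch reusing the names from the server.xml snippet above; verify the merge behaviour on your Tomcat 5.5 instances before relying on it:

        <!-- $CATALINA_HOME/conf/context.xml -->
        <Context>
            <Resource name="bean/MyFactory"
                      auth="Container"
                      type="com.somewhere.Connection"
                      factory="com.somewhere.MyFactory"/>
        </Context>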

    Read the article

  • Publish Maven artifacts on FTP with Hudson FTP Publisher Plugin

    - by jaguard
    I'm building a number of artifacts (zip files for different environments: test, dev) using the maven-assembly-plugin in a specialized Maven profile. I want to copy/collect these artifacts onto an FTP server, keeping the version (01.07.10.16.Wed-1626) as a folder, so I need to copy from test/build/01.07.10.16.Wed-1626/ to ftp://my-ftp-server:21/projects/myserver-1.7/01.07.10.16.Wed-1626/.

    The layout of the Maven output is this:

        target/
            build/
                01.07.10.16.Wed-1626/
                    my-server-01.07.10.16.Wed-1626-dev.zip
                    my-server-01.07.10.16.Wed-1626-test.zip

    For copying the artifacts I'm using the FTP Publisher Plugin, but it seems I'm missing something: even though the build is OK and the artifacts are built without problems, the job finishes without copying the artifacts, and there is no log info in the console about copying them.

    My FTP publisher config (FTP repository hosts) is:

        Hostname: my-ftp-server
        Port: 21
        Timeout: 10000
        Root Repository Path: projects
        User Name: my-user
        Password: my-pass

    My Hudson job FTP publisher config (Publish artifacts to FTP) is:

        FTP site: my-ftp-server
        Files to upload
            Source: target/build/**
            Destination: myserver-1.7

    1. Is there any log where I can check for FTP copy errors?
    2. Is there any problem with the file pattern (source) or with the destination?

    Read the article

  • Failure to register .dll with regsvr32 - only in Release build.

    - by Hendrik
    Hi, I'm having a weird problem when trying to register the .dll I created using regsvr32. During development everything went fine; the Debug version registers and works fine. Now I wanted to create a Release version, but that version does not register anymore. regsvr32 comes up with the following error:

        The module "mpegsplitter.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified procedure could not be found.

    Some research brought me to Dependency Walker, which reports this error:

        At least one module has an unresolved import due to a missing export function in an implicitly dependent module.

    It also shows a dependency on "crtdll.dll" that the Debug version does not have (the function view shows some functions that normally should be in ole32.dll), which is colored red'ish.

    So far so good; I guess it's somehow related to what Dependency Walker shows there. But where do I go from here? How do I fix it? Any help would be greatly appreciated; this has been keeping me busy for several hours already. Thanks!

    Read the article

  • Append a dynamically changing watermark to a PDF in SharePoint

    - by ccomet
    This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which dictates what the current status of the document is.

    Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark will change.

    One method, the traditional method for PDF watermarking, of accomplishing this is to have one "clean" copy of the document somewhere hidden on the site, and create a new PDF from it that has the watermark at each stage of the approval process. Since the filename will never change, this new PDF can be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, in the various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complex nature of juggling around two libraries and the fact we double the number of files stored. My client and I agree that this is not a practical path to choose.

    What we would like to do is be able to "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, in most cases when you make something like a watermark, which in its nature is supposed to be "unmodifiable", you won't be able to edit it later.

    So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object model process? Thanks in advance!

    Read the article

  • Problems with jQuery load and getJSON only when using Chrome

    - by leftend
    I'm having an issue with two jQuery calls. The first is a "load" that retrieves HTML and displays it on the page (it does include some JavaScript and CSS in the code that is returned). The second is a "getJSON" that returns JSON - the JSON returned is valid. Everything works fine in every other browser I've tried - except Chrome for either Windows or Mac.

    The page in question is here: http://urbanistguide.com/category/Contemporary.aspx

    When you click on a restaurant name in IE/FF, you should see that item expand with more info - and a map displayed to the right. However, if you do this in Chrome all you get is the JSON data printed to the screen.

    The first problem spot is when the "load" function is called here:

        var fulllisting = top.find(".listingfull");
        fulllisting.load(href2, function() {
            fulllisting.append("<div style=\"width:99%;margin-top:10px;text-align:right;\"><a href=\"#\" class=\"" + obj.attr("id") + "\">X</a>");
            itemId = fulllisting.find("a.listinglink").attr("id");
            ...

    In the above code, the callback function doesn't seem to get invoked.

    The second problem spot is when the "getJSON" function is called:

        $.getJSON(href, function(data) {
            if (data.error.length > 0) {
                // display error message
            } else {
                ...
            }

    In this case, it just seems to follow the link instead of performing the callback... and yes, I am doing a "return false;" at the end of all of this to prevent the link from executing. All of the rest of the code is inline on that page if you want to view the source code. Any ideas?? Thanks

    Read the article

  • Exporting static data in a DLL

    - by Gayan
    I have a DLL which contains a class with static members. I use __declspec(dllexport) in order to make use of this class's methods. But when I link it to another project and try to compile it, I get "unresolved external symbol" errors for the static data.

    E.g. in the DLL, Test.h:

        class __declspec(dllexport) Test {
        protected:
            static int d;
        public:
            static void m() {}
        };

    In the DLL, Test.cpp:

        #include "Test.h"
        int Test::d;

    In the application which uses Test, I call m(). I also tried using __declspec(dllexport) for each method separately, but I still get the same link errors for the static members. If I check the DLL (the .lib) using dumpbin, I can see that the symbols have been exported. For instance, the app gives the following error at link time:

        1>Main.obj : error LNK2001: unresolved external symbol "protected: static int CalcEngine::i_MatrixRow" (?i_MatrixRow@CalcEngine@@1HA)

    But the dumpbin of the .lib contains:

        Version      : 0
        Machine      : 14C (x86)
        TimeDateStamp: 4BA3611A Fri Mar 19 17:03:46 2010
        SizeOfData   : 0000002C
        DLL name     : CalcEngine.dll
        Symbol name  : ?i_MatrixRow@CalcEngine@@1HA (protected: static int CalcEngine::i_MatrixRow)
        Type         : data
        Name type    : name
        Hint         : 31
        Name         : ?i_MatrixRow@CalcEngine@@1HA

    I can't figure out how to solve this. What am I doing wrong? How can I get over these errors?

    P.S. The code was originally developed for Linux and the .so/binary combination works without a problem.
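
    The usual cause of this pattern (methods link, static data does not) is that the consuming project also compiles the header with __declspec(dllexport). The import library only exposes the __imp_-prefixed symbol for data, so the consumer has to see __declspec(dllimport) instead. A minimal sketch of the standard export/import macro; MYLIB_EXPORTS is an assumed preprocessor symbol that only the DLL project defines:

        // Test.h
        #ifdef MYLIB_EXPORTS
        #  define MYLIB_API __declspec(dllexport)   // building the DLL
        #else
        #  define MYLIB_API __declspec(dllimport)   // consuming the DLL
        #endif

        class MYLIB_API Test {
        protected:
            static int d;
        public:
            static void m();
        };

    With dllimport in effect on the application side, references to Test::d resolve against the __imp_ symbol that dumpbin is already showing in the .lib.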

    Read the article
