Search Results

Search found 9934 results on 398 pages for 'iis logs'.


  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43679 characters.

    I've read in various places on the Internet that you can set Maximum Characters Retrieved for XML Data (in Tools > Options > Query Results > SQL Server > Results To Grid) to Unlimited, and then perform a query such as this:

        select Convert(xml, Details) from Events where EventID = 13920

    (Note that the data in the column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.)

    However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:

        Unable to show XML. The following error happened: Unexpected end of file has occurred. Line 5, position 220160. One solution is to increase the number of characters retrieved from the server for XML data. To change this setting, on the Tools menu, click Options.

    So, any idea how to access this data? Would converting the column to varchar(MAX) fix my woes?
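
    (A workaround that sidesteps the SSMS grid entirely is to pull the column over ADO.NET and write it to a file; ExecuteScalar imposes no display cap. A minimal C# sketch - the connection string and output path are placeholders:)

        using System.Data.SqlClient;
        using System.IO;

        class DumpDetails
        {
            static void Main()
            {
                // Placeholder connection string; point it at the live server.
                const string connStr = "Server=.;Database=MyDb;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT CAST(Details AS varchar(MAX)) FROM Events WHERE EventID = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", 13920);
                    conn.Open();
                    // ExecuteScalar returns the whole value; no grid truncation applies.
                    var details = (string)cmd.ExecuteScalar();
                    File.WriteAllText(@"C:\temp\event-13920-details.txt", details);
                }
            }
        }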

    Read the article

  • Sending basic authentication information via form

    - by VolatileStorm
    I am working on a site that currently uses a basic authentication dialog box login system - the type of dialog you get if you go here: http://www.dur.ac.uk/vm.boatclub/password/index.php

    I did not set this system up and am not in a position to easily or quickly work around it, but it DOES work. The issue, however, is that the dialog box is not very helpful in telling you what login information to use (that is, which username and password combination), so I would like to replace it with a form. I had been thinking this wasn't possible, but I wanted to ask in order to find out. Is it possible to set up an HTML form that sends the data to the server in the same way it would be sent using this dialog box? Alternatively, is it possible to set up a PHP script that would take normal form data and pass it to the server such that it logs in?

    Edit: After being told that this is basic authentication, I went around and managed to find a way that works and keeps the user persistently logged in. The solution was simply to redirect the user to:

        http://username:[email protected]/vm.boatclub/password/index.php

    However, this does not work in Internet Explorer: IE removed support for credentials in URLs because of phishing abuse about three years ago. Is there a way to use JavaScript to get the browser to access the site in this way? Or will I have to simply change my UI?
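
    (One JavaScript route worth sketching: XMLHttpRequest's open() accepts optional user/password arguments, so a form could authenticate in the background and navigate on success. Whether the browser then reuses the cached credentials for normal navigation varies, so this is an assumption to test against IE specifically:)

        function basicAuthLogin(user, pass) {
            var xhr = new XMLHttpRequest();
            // The 4th/5th arguments supply the Basic auth credentials.
            xhr.open('GET', '/vm.boatclub/password/index.php', true, user, pass);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4) {
                    if (xhr.status === 200) {
                        // Credentials accepted; most browsers cache them for the realm.
                        window.location = '/vm.boatclub/password/index.php';
                    } else {
                        alert('Login failed - check the username and password.');
                    }
                }
            };
            xhr.send(null);
        }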

    Read the article

  • Problems with ASP.NET, machine-level web.config, and the location element

    - by Daniel Schaffer
    I've got a server running Windows Web Server 2008 R2. The machine-level web.config has the following entries:

        <location path="Preview">
          <appSettings>
            <add key="Environment" value="Preview" />
          </appSettings>
        </location>
        <location path="Staging">
          <appSettings>
            <add key="Environment" value="Staging" />
          </appSettings>
        </location>
        <location path="Production">
          <appSettings>
            <add key="Environment" value="Production" />
          </appSettings>
        </location>

    I have a website that I'd set up in the directory D:\Sites\Preview\, so the full path would be D:\Sites\Preview\WebSite1. If I put up a simple aspx file that just outputs the value of ConfigurationManager.AppSettings["Environment"], it displays the value Preview. I'm not clear on exactly how that works, but it does.

    I'd set this up several weeks ago, and just now tried to duplicate it - I put a second site in the D:\Sites\Preview\ directory, expecting that it would automatically pick up the appropriate appSettings entries, but for some reason it hasn't - the same aspx page doesn't show anything. Additionally, when I go into the IIS manager and open the Configuration Editor, there are no settings in there, whereas there are settings listed for the first site. Any ideas as to what I could be missing? Is the location element intended to work like this, or did I just find some magical fluke with my first site?
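
    (One plausible explanation to check: at the machine level, the path attribute of a location element is matched against the IIS site name and virtual path, not the folder on disk, so the first site would only match if it happens to be named "Preview" in IIS Manager. Under that assumption, the second site needs its own entry keyed to its site name - a hedged sketch, with "WebSite2" standing in for whatever the second site is actually called:)

        <location path="WebSite2">
          <appSettings>
            <add key="Environment" value="Preview" />
          </appSettings>
        </location>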

    Read the article

  • TeamCity output artifacts not published to IIS7 folder

    - by clausas
    I am trying to set up TeamCity to build and deploy an ASP.NET MVC application. I have the same setup running successfully on other servers using TeamCity 4.5, but the new server is running TeamCity 6, and I am having trouble getting it to work as expected. TeamCity manages to get the files from source control, and the project (a Visual Studio 2008 solution set to "Build") builds and outputs the necessary files as expected. The problem seems to be with my artifact paths, as the output files are not copied to the website folder.

    My solution consists of a dozen projects, of which the "Web" project is the interesting one in this case. The build checkout directory is C:\TeamCity\buildAgent\work\7da320cebf0ee541, and the "Web" project is found in C:\TeamCity\buildAgent\work\7da320cebf0ee541\Web. I have set up my build configuration with the following artifact paths (relative from the checkout directory to the folder containing the website):

        Web/bin=>../../../../inetpub/wwwroot/staging/bin
        Web/Content=>../../../../inetpub/wwwroot/staging/Content
        Web/Views=>../../../../inetpub/wwwroot/staging/Views
        Web/Media=>../../../../inetpub/wwwroot/staging/Media
        Web/*.aspx=>../../../../inetpub/wwwroot/staging
        Web/*.asax=>../../../../inetpub/wwwroot/staging

    (I've tried with more ../ just in case, but it didn't make a difference.) This is the output I get from the log:

        [19:35:29]: Publishing artifacts (1s)
        [19:35:29]: [Publishing artifacts] Paths to publish: [Web/bin=../../../../inetpub/wwwroot/staging/bin, Web/Content=../../../../inetpub/wwwroot/staging/Content, Web/obj=../../../../inetpub/wwwroot/staging/obj, Web/Views=../../../../inetpub/wwwroot/staging/Views, Web/Media=../../../../inetpub/wwwroot/staging/Media, Web/.aspx=../../../../inetpub/wwwroot/staging, Web/.asax=../../../../inetpub/wwwroot/staging, teamcity-info.xml]
        [19:35:30]: [Publishing artifacts] Sending files
        [19:35:32]: Build finished

    Logs from some of the other servers running TeamCity 4.5 use a different format, with a line for each of the artifacts being published; I'm not sure if this is relevant or only due to a different logging format. Everything seems to be working, but no files are put in my website folder after a build. Am I missing something here? Any help will be much appreciated :)

    Read the article

  • ASP.NET MVC View ReRenders Part of Itself

    - by Jason
    In all my years of .NET programming I have not run across a bug as weird as this one. I discovered the problem because some elements on the page were getting double-bound by jQuery. After some (ridiculous) debugging, I finally discovered that once the view is completely done rendering itself and all its children partial views, it goes back to an arbitrary yet consistent location and re-renders itself. I have been pulling my hair out about this for two days now and I simply cannot get it to render itself only once!

    For lack of any better debugging idea, I've painstakingly added logging tracers throughout the HTML just so I can pin down what may be causing this. For instance, this code ($log just logs to the console):

        ...
        <script type="text/javascript">var x = 0; $log('1');</script>
        <div id="new-ad-form">
        <script type="text/javascript">x++;$log('1.5', x);</script>
        ...

    will yield:

        ...    <--- this happens before this snippet
        1
        1.5 1
        ...
        10     <--- bottom of my form, after snippet
        1.5 2  <--- beginning of part that runs again!
        ...
        9      <--- this happens after this snippet

    I've searched my codebase high and low, but there is NOTHING that says it should re-render part of a page. I'm wondering if jQueryUI has anything to do with it, as #new-ad-form is the container for a jQueryUI dialog box. If this is potentially the case, here's my init code for that:

        $('#new-ad-form').dialog({
            autoOpen: false,
            modal: true,
            width: 470,
            title: 'Create A New Ad',
            position: ['center', 35],
            close: AdEditor.reset
        });

    Read the article

  • Sys.WebForms.PageRequestManagerServerErrorException: .... The status code returned from the server was: 404

    - by webnoob
    Hi All, I have seen a few posts regarding this issue, but none specific to my problem, and I have no idea what I need to do to debug this. I have some combo boxes on an aspx page; when I select a value from the first one, it fills the second with values, and so on with the third and fourth. This works with no problems until I wrap an ASP.NET UpdatePanel around the combo boxes and try to "ajaxify" the whole process so the page isn't dancing around. The exact error I get is:

        Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 404

    Some things to note:

    - I am using URL rewriting - this is what I think is causing the problem.
    - The error occurs whenever I choose a selection for a SECOND time. This means I could select a value from the first combo box and get the same error (so it is happening on the second postback, no matter which combo box it's from).
    - I have tried setting EnablePartialRendering="false" on the ScriptManager, but as I said, it works when not using Ajax, so I don't know how to debug the issue.

    My server is Windows 2008 running IIS 7 with ASP.NET 2.0. I would really appreciate your help. Thanks in advance.
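
    (A remedy that often helps when UpdatePanel postbacks travel through rewritten URLs is to point the form's action back at the public URL, so the async postback isn't aimed at the internal, unrewritten path that IIS then 404s. A sketch - the form ID is the template default, and note HtmlForm.Action is only settable from .NET 3.5 SP1 onward; on plain 2.0 the same effect needs a form control adapter:)

        using System;
        using System.Web.UI;

        public partial class CascadingPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Re-point the postback target at the URL the browser actually sees,
                // so async postbacks from the UpdatePanel resolve correctly.
                form1.Action = Request.RawUrl;
            }
        }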

    Read the article

  • The worker process calls OpenSubKey but it returns null when accessing the Remote Registry service

    - by Cary
    My web server is deployed in IIS 6. The web server starts the Remote Registry service on the remote machine successfully by creating a process to run some remote operation commands. The first line below runs successfully, but the second line returns null:

        #1 RegistryKey remoteRegKey = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, "139.24.185.27");
        #2 RegistryKey targetKey = remoteRegKey.OpenSubKey(@"SOFTWARE\Wow6432Node\XXXX\XXXX\Config\Modality", true);

    I tried to find the reason on MSDN. It describes only one case in which OpenSubKey returns null: when the subkey does not exist. If the caller does not have enough permission, it should throw an exception instead. But the subkey really does exist. I switched to another machine to debug my code with Visual Studio 2008, and there both lines run successfully. If the process has enough permission to open LocalMachine, it should also be able to open any of its subkeys. I am quite confused about this.
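
    (A diagnostic sketch that can help separate "key not there" from "remote read blocked": walk the path one level at a time and see where access stops. Remote reads go through the Remote Registry service's ACLs, so a level that enumerates fine locally but comes back null remotely points at permissions for the worker-process identity rather than a missing key. The IP and the truncated XXXX segments are kept as in the question:)

        using System;
        using Microsoft.Win32;

        class RemoteRegistryProbe
        {
            static void Main()
            {
                string[] levels = { "SOFTWARE", "Wow6432Node", "XXXX" };
                RegistryKey key = RegistryKey.OpenRemoteBaseKey(
                    RegistryHive.LocalMachine, "139.24.185.27");
                foreach (string level in levels)
                {
                    RegistryKey next = key.OpenSubKey(level);
                    Console.WriteLine("{0}: {1}", level,
                        next == null ? "NULL" : "ok, " + next.SubKeyCount + " subkeys");
                    if (next == null) break;
                    key = next;
                }
            }
        }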

    Read the article

  • nginx error: (99: Cannot assign requested address)

    - by k-g-f
    I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server:

        $ sudo /etc/init.d/nginx start

    I get the following error:

        Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)

    where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2. My nginx.conf file looks like this:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 3;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    and my /usr/local/nginx/sites-enabled/example.com looks like:

        server {
            listen IP:80;
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen IP:443 default ssl;
            ssl on;
            ssl_certificate /etc/ssl/certs/myssl.crt;
            ssl_certificate_key /etc/ssl/private/myssl.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            server_name example.com;
            access_log /home/example/example.com/log/access.log;
            error_log /home/example/example.com/log/error.log;
        }
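
    (A likely lead, given EC2: an elastic/public IP is NATed and never appears on the instance's own interfaces, so nginx cannot bind to it and fails with exactly this error. The usual fix - an assumption here, since the real address is masked - is to listen on all interfaces, or on the private address that ifconfig reports:)

        server {
            # Bind to every local interface; the public IP is NAT-mapped by EC2
            # and is not assignable inside the instance.
            listen 80;
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen 443 default ssl;
            # ...ssl settings unchanged...
            server_name example.com;
        }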

    Read the article

  • Struts2 Sitemesh not working in WAS 6 server

    - by Jithin
    I have a Struts2/Spring application which works fine in the Jetty server, but when I try migrating it to WAS 6 the SiteMesh decorator is not applied. The server logs show no errors. Is this a known issue? My web.xml looks like this:

        <filter>
          <filter-name>action2-cleanup</filter-name>
          <filter-class>org.apache.struts2.dispatcher.ActionContextCleanUp</filter-class>
        </filter>
        <filter>
          <filter-name>sitemesh</filter-name>
          <filter-class>com.opensymphony.module.sitemesh.filter.PageFilter</filter-class>
        </filter>
        <filter>
          <filter-name>action2</filter-name>
          <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
        </filter>

        <filter-mapping>
          <filter-name>action2</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
          <filter-name>action2-cleanup</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
          <filter-name>sitemesh</filter-name>
          <url-pattern>/*</url-pattern>
          <dispatcher>FORWARD</dispatcher>
          <dispatcher>INCLUDE</dispatcher>
          <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
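
    (One thing that stands out: the Struts2 SiteMesh integration documents the filter-mapping order as cleanup first, then SiteMesh, then the dispatcher, whereas the mappings above run action2 first. Containers differ in how forgiving they are of this, which could explain Jetty working where WAS does not. A sketch of the documented ordering:)

        <filter-mapping>
          <filter-name>action2-cleanup</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
          <filter-name>sitemesh</filter-name>
          <url-pattern>/*</url-pattern>
          <dispatcher>FORWARD</dispatcher>
          <dispatcher>INCLUDE</dispatcher>
          <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
        <filter-mapping>
          <filter-name>action2</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>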

    Read the article

  • Cookie not renewing/overwriting in IE

    - by deceze
    I have a weird quirk with cookies in IE. When a user logs into the site, I'm generating a new session id and hence need to overwrite the cookie. The flow is basically:

    1. Client goes to the https://secure.example.com/users/login page, automatically receiving a session id.
    2. Client POSTs login credentials to the same address.
    3. Client receives the following headers together with a 302 redirect to https://secure.example.com/users/mypage:

        CAKEPHP=deleted; expires=Sun, 05-Apr-2009 04:50:35 GMT; path=/
        CAKEPHP=98hnIO23...; expires=Mon, 12 Apr 2010 04:50:36 GMT; path=/; secure

    4. Client is supposed to visit https://secure.example.com/users/mypage, presenting the new session id.

    This works in all browsers except IE (tested in 7 & 8). IE retains the old, unauthenticated session id, and is redirected back to the login page. It works on my local test environment (using a self-signed certificate at https://localhost:8443/...), but not on the live server. I'm using CakePHP and simply issue a $this->Session->renew(), which produces the above cookie headers. Any ideas how to get IE to accept the new cookie?

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue on several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field.

    When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload); the user fills in the appropriate data (including picking a value or values for this lookup) and clicks "check in", and the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails. The user-selected values for the lookup field never make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem in the event listener method before a single line of our code has run, we see that the value for the lookup field is null.

    It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this and worked around it?

    Edit: the only entry I see in the SP trace logs relating to the problem:

        CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • Asp.Net Login Control very slow initial connection to Non-Trusted AD Domain

    - by Eric Brown - Cal
    The ASP.NET Login control is very slow making the initial connection to AD when authenticating to a different domain than the one the web server is a member of. The problem occurs on the IIS server and when using Visual Studio's built-in web server. It takes about 30 seconds the first time the control is used to connect against another domain. There is no trust relationship between the web server's domain and the other domains (we attempted connecting to several different domains). Subsequent connections execute quickly until the connection times out.

    Using Sysinternals Process Monitor to troubleshoot, there are two OpenQuery operations right before the delay on C:\WINDOWS\assembly\GAC_MSIL\System.DirectoryServices\2.0.0.0_b03f5f7f11d50a3a\Netapi32.dll with result NAME NOT FOUND, and right after the 30-second delay the TCP Send and TCP Receive operations indicate communication begins with the AD server.

    Things we have tried: impersonating an administrator on the web server in the web.config; granting permissions on the crypto keys to NetworkService and ASPNET; specifying the server by IP instead of DNS name; multiple variations of specifying the name and LDAP server with domains and OUs; local host entries; looking for blocked ports (SYN_SENT) with netstat -an. Nslookup resolves all the domains and systems involved correctly, and tracert shows the correct routes. Any ideas or hints are greatly appreciated.
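
    (While diagnosing, it can be worth timing a bind that names a domain controller explicitly and uses FastBind, which skips part of the serverless-binding/locator work that tends to stall against untrusted domains. A sketch - the server, domain, and credentials are placeholders:)

        using System;
        using System.Diagnostics;
        using System.DirectoryServices;

        class AdBindProbe
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                // Placeholders: point directly at a DC in the target domain.
                using (var entry = new DirectoryEntry(
                    "LDAP://dc01.otherdomain.local/DC=otherdomain,DC=local",
                    @"OTHERDOMAIN\someuser", "password",
                    AuthenticationTypes.Secure | AuthenticationTypes.FastBind))
                {
                    object native = entry.NativeObject; // forces the bind
                    Console.WriteLine("Bind took {0} ms", sw.ElapsedMilliseconds);
                }
            }
        }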

    Read the article

  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire alarm phone call, AIM messages and email today from a client stating "The site is down! WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So I had the representation of data, but a bunch of empty fields. (Needless to say, I need to write a catch for this into my code.)

    My real question is: where can I figure out what happened? I've got access to phpMyAdmin and that's about it. I found some access logs in the root directory of my server, but those just tell me the client was in the admin area I built, editing that record. I'd like to know specifically what they did that made all of the data go away (what query was run, etc.). Is it possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha
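
    (If the client can reproduce the action, one low-friction way to catch the offending statement from phpMyAdmin - assuming MySQL 5.1+, where these variables are dynamic, and an account with the SUPER privilege - is to switch on the general query log briefly:)

        -- Log every statement to a table browsable from phpMyAdmin;
        -- switch it off again promptly, it is very verbose.
        SET GLOBAL log_output = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- ...reproduce the problem, then inspect:
        SELECT event_time, user_host, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 100;

        SET GLOBAL general_log = 'OFF';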

    Read the article

  • Could Python's logging SMTP Handler be freezing my thread for 2 minutes?

    - by Oddthinking
    A rather confusing sequence of events happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true.

    I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application, when it is missing deadlines. I am using Python's logging module on a remote server, and have set it up, with a configuration file, so that all logs of severity ERROR or higher are emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems I might get a dozen in a minute - annoying, but nothing that should stress SMTP.

    I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (for both a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking?

    ** EDIT: I actually counted. A little under 4000 short emails in two hours - far more than I suggested, sorry. But enough to over-fill Sendmail's buffers?
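
    (Whatever the root cause, the blocking can be taken off the critical path by draining records to SMTPHandler from a background thread. A sketch for the Python 2.x of that era - logging.handlers.QueueHandler only appeared in 3.2, so this rolls its own; the addresses are placeholders:)

        import logging
        import logging.handlers
        import Queue
        import threading

        class AsyncSMTPHandler(logging.Handler):
            """Hands records to a worker thread so emit() never blocks."""
            def __init__(self, inner):
                logging.Handler.__init__(self)
                self._inner = inner
                self._queue = Queue.Queue()
                worker = threading.Thread(target=self._drain)
                worker.setDaemon(True)
                worker.start()

            def emit(self, record):
                self._queue.put(record)      # returns immediately

            def _drain(self):
                while True:
                    record = self._queue.get()
                    try:
                        self._inner.emit(record)   # SMTP happens here, off-thread
                    except Exception:
                        pass   # mail trouble must never kill the worker

        smtp = logging.handlers.SMTPHandler(('localhost', 25), 'app@localhost',
                                            ['admin@localhost'], 'Application error')
        async_handler = AsyncSMTPHandler(smtp)
        async_handler.setLevel(logging.ERROR)
        logging.getLogger().addHandler(async_handler)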

    Read the article

  • Extend legacy site with another server-side programming platform: best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have one week to put dynamic data from an intranet company database onto this site: 2-3 pages. I contacted the company site administrator and she showed me the administrative part - the CMS only allows you to insert HTML blocks and manage the site map (the site is deployed on a machine that is inside the company and fully accessible and upgradable).

    I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, because I'm not sure they are still around and capable enough to extend this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

    Read the article

  • ASMX Web Service only works when all of the code is in one file without code-behind

    - by Ben McCormack
    I have an ASMX Web Service that has its code entirely in a code-behind file, so that the entire contents of the .asmx file is:

        <%@ WebService Language="C#" CodeBehind="~/App_Code/AddressValidation.cs" Class="AddressValidation" %>

    On my test machine (Windows XP with IIS 5), I set up a virtual directory just for this ASP.NET 2.0 solution and everything works great. All my code is separated nicely and it just works. However, when we deployed this solution to our Windows Server 2003 development environment, we noticed that the code only compiled when all of the code was dropped directly into the .asmx file, meaning that the solution didn't work with code-behind. We can't figure out why this is happening.

    One thing that's different about our setup in the development environment is that instead of creating a separate virtual directory just for this solution, we dropped it into an existing directory that runs a classic ASP application. So here we have a folder with an ASP.NET 2.0 application inside a directory that contains a classic ASP application. Granted, everything in the ASP.NET 2.0 application works if all of the code is within the .asmx file and not in code-behind, but we'd really like to know why it's not recognizing the code-behind files and compiling them correctly.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob and svnsync (it needs to go from inside to outside of a firewall, so no post-commit hook is possible). We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried manually recopying the properties via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query them, it's just the same. Any ideas how I could approach this problem, or even better - how to solve it? Any ideas appreciated.
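
    (For comparison, the minimal pre-revprop-change hook svnsync needs on the mirror; it must carry the execute bit and exit 0, and it lives in the mirror repository's hooks/ directory. If a stricter script is in place, it is worth ruling out that it rejects svn:author or svn:date for the sync user - an assumption, not a diagnosis:)

        #!/bin/sh
        # hooks/pre-revprop-change -- allow svnsync to set revision properties
        # (svn:author, svn:date, svn:log) on the mirror.
        exit 0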

    Read the article

  • How to use log4j for an Axis2 web service

    - by KItis
    I have created a simple Axis2 web service to understand logging functionality in a web service. In this sample, when a client calls the web service, it first reads the log4j property file, and then I can see the logs being printed on the console. But I have also included a file appender in the property file, and I cannot find the log file anywhere in my sample web application. I am using Tomcat to run the web service. The following is the web service method called by the client:

        package test;

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        import org.apache.log4j.BasicConfigurator;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PropertyConfigurator;

        public class WSInterface {

            static final Logger logger = Logger.getLogger(WSInterface.class);

            public String log4jTest(String input) {
                InputStream inputStream = this.getClass().getClassLoader()
                        .getResourceAsStream(input);
                Properties properties = new Properties();
                System.out.println("InputStream is: " + inputStream);
                // load the inputStream using the Properties
                try {
                    properties.load(inputStream);
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                PropertyConfigurator.configure(properties);
                logger.debug("Hello World!");
                Class myclass = WSInterface.class;
                String url = myclass.getResource("WSInterface.class").toString();
                String output = "";
                return url;
            }
        }
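
    (One likely explanation worth checking: a relative path in a FileAppender resolves against the JVM's working directory, which for Tomcat is usually its bin folder, so the file ends up outside the webapp. A hedged log4j.properties sketch using an absolute location - the appender names and log file name are assumptions:)

        log4j.rootLogger=DEBUG, console, file

        log4j.appender.console=org.apache.log4j.ConsoleAppender
        log4j.appender.console.layout=org.apache.log4j.PatternLayout
        log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

        # Absolute path (or ${catalina.home}) so the file does not land in tomcat/bin
        log4j.appender.file=org.apache.log4j.FileAppender
        log4j.appender.file.File=${catalina.home}/logs/ws.log
        log4j.appender.file.layout=org.apache.log4j.PatternLayout
        log4j.appender.file.layout.ConversionPattern=%d %-5p %c - %m%n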

    Read the article

  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. To duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application.
    2. Right click the project and select Properties.
    3. On the Web tab, select "Use Local IIS Web Server" and click Create Virtual Directory.
    4. Save all, unload the project, and edit the project file.
    5. Change MvcBuildViews to true, save all, and reload the project.
    6. Right click the project and select Publish.
    7. Choose the file system publish method, enter a target location, and choose Delete all existing files.
    8. Select Publish.
    9. Right click the project and select Publish again.

    Each time I do the above I get the following error:

        It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level...

    The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. To make the problem go away, I have to set MvcBuildViews back to false in the project file, which is not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc.? Thanks.
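
    (For reference, the workaround that circulated for this problem: the web.config copies that Publish stages under obj\ are what trip the views compiler, so clean them out before AspNetCompiler runs. A sketch of the adapted csproj target - MSBuild 4.0 syntax, property names per the default MVC 2 template:)

        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <ItemGroup>
            <ExtraWebConfigs Include="$(BaseIntermediateOutputPath)\**\web.config" />
          </ItemGroup>
          <!-- Remove web.configs staged by Publish under obj\ before compiling views -->
          <Delete Files="@(ExtraWebConfigs)" Condition="'@(ExtraWebConfigs)' != ''" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>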

    Read the article

  • Passing a URL as a URL parameter

    - by Andrea
    I am implementing OpenID login in a CakePHP application. At a certain point, I need to redirect to another action while preserving the information about the OpenID identity, which is itself a URL (with GET parameters), for instance:

        https://www.google.com/accounts/o8/id?id=31g2iy321i3y1idh43q7tyYgdsjhd863Es

    How do I pass this data? The first attempt would be:

        function openid() {
            ...
            $this->redirect(array('controller' => 'users', 'action' => 'openid_create', $openid));
        }

    but the obvious problem is that this completely messes up the way CakePHP parses URL parameters. I'd need to do either of the following:

    1. encode the URL in a CakePHP-friendly manner for passing it, and decode it after that, or
    2. pass the URL as a POST parameter,

    but I don't know how to do this.

    Edit: In response to comments, I should be more clear. I am using the OpenID component, and I have a working OpenID implementation. What I need to do is link OpenID with an existing user system. When a new user logs in via OpenID, I ask for more details, and then create a new user with this data. The problem is that I have to keep the OpenID URL throughout this process.
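
    (A sketch of option 1: plain urlencode() tends to break here because the decoded slashes and query string collide with Cake's routing, whereas URL-safe base64 travels as an opaque routed parameter. The helper names are made up:)

        <?php
        // Encode the OpenID URL as an opaque, routing-safe token.
        function openid_token($url) {
            return strtr(base64_encode($url), '+/=', '-_~');
        }

        function openid_url($token) {
            return base64_decode(strtr($token, '-_~', '+/='));
        }

        // In the controller:
        //   $this->redirect(array('controller' => 'users',
        //                         'action' => 'openid_create',
        //                         openid_token($openid)));
        // ...and in openid_create($token):
        //   $openid = openid_url($token);
        ?>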

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
    - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move, delete etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
    - I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy.
    - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Read attributes of MSBuild custom tasks via events in the Logger

    - by gt
    I am trying to write an MSBuild logger module which logs information, on receiving TaskStarted events, about the task and its parameters. The build is run with the command:

        MSBuild.exe /logger:MyLogger.dll build.xml

    Within the build.xml is a sequence of tasks, most of which have been custom written to compile a (C++ or C#) solution, and are accessed with the following custom task:

        <DoCompile Desc="Building MyProject 1" Param1="$(Param1Value)" />
        <DoCompile Desc="Building MyProject 2" Param1="$(Param1Value)" />
        <!-- etc -->

    The custom build task DoCompile is defined as:

        public class DoCompile : Microsoft.Build.Utilities.Task
        {
            [Required]
            public string Description
            {
                set { _description = value; }
            }
            // ... more code here ...
        }

    While the build is running, as each task starts, the logger module receives IEventSource.TaskStarted events, subscribed to as follows:

        public class MyLogger : Microsoft.Build.Utilities.Logger
        {
            public override void Initialize(Microsoft.Build.Framework.IEventSource eventSource)
            {
                eventSource.TaskStarted += taskStarted;
            }

            private void taskStarted(object sender, Microsoft.Build.Framework.TaskStartedEventArgs e)
            {
                // write e.TaskName, attributes and e.Timestamp to log file
            }
        }

    The problem I have is that in the taskStarted() method above, I want to be able to access the attributes of the task for which the event was fired. I only have access to the logger code and cannot change either the build.xml or the custom build tasks. Can anyone suggest a way I can do this?
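
    (TaskStartedEventArgs itself carries no parameter values, but at diagnostic verbosity the engine emits each task parameter as a low-importance "Task Parameter:..." message just after TaskStarted. One approach is to harvest those - a sketch; the message prefix is an engine detail rather than a public contract, so treat it as an assumption to verify against your MSBuild version:)

        using Microsoft.Build.Framework;
        using Microsoft.Build.Utilities;

        public class MyLogger : Logger
        {
            public override void Initialize(IEventSource eventSource)
            {
                eventSource.TaskStarted += (s, e) =>
                    LogLine(e.Timestamp + " task started: " + e.TaskName);

                // Run MSBuild with /v:diag so the parameter messages are raised.
                eventSource.MessageRaised += (s, e) =>
                {
                    if (e.Message != null && e.Message.StartsWith("Task Parameter:"))
                        LogLine("  " + e.Message.Substring("Task Parameter:".Length).Trim());
                };
            }

            private void LogLine(string text)
            {
                System.IO.File.AppendAllText("msbuild-tasks.log",
                    text + System.Environment.NewLine);
            }
        }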

    Read the article

  • ASP.NET MVC routing issue with Google Chrome client

    - by synergetic
    My Silverlight 4 app is hosted in an ASP.NET MVC 2 web application. It works fine when I browse with Internet Explorer 8, but Google Chrome (version 5) cannot find the ASP.NET controllers. Specifically, the following controller action works both with Chrome and IE:

        //[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
        public ContentResult TestMe()
        {
            ContentResult result = new ContentResult();
            XElement response = new XElement("SvrResponse",
                new XElement("Data", "my data"));
            result.Content = response.ToString();
            return result;
        }

    If I uncomment the [OutputCache] attribute, it works with IE but not with Chrome. Also, I use custom model binding with controllers, so if I write the following:

        public ContentResult TestMe(UserContext userContext) { ... }

    it also works with IE, but again not with Chrome, which gives me an error message saying the resource was not found. Of course, I configured IIS 6 to handle all requests via aspnet_isapi.dll, and I have registered the custom model binder in my web app's Global.asax inside the Application_Start() method. Can someone explain what might be the cause? Thank you.

    Read the article

  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name that it gets from the database, to create an instance of said type and add it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed.

    However, I'm experiencing issues whereby if the web.config gets updated (and, in some scarier instances, if the application pool gets recycled) then the application loses all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread's assemblies list.

    To get around this I added a second check which is a bit dirty but recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If one doesn't, it loads it using Assembly.Load, fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly - for example my actual web project rather than the framework itself - and it then complains that duplicate references have been added!

    Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to be all in one "hit" so that site visitors don't see any difference other than a small delay, so an app_offline.htm file is out of the question. Programmatically renaming a DLL in the bin and then naming it back does work, but requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!
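
    (As a cleaner way to force every bin assembly into the new AppDomain at startup, instead of renaming DLLs, ASP.NET's BuildManager can be asked for the full referenced set. A sketch, assuming a Global.asax code-behind:)

        using System;
        using System.Collections;
        using System.Reflection;
        using System.Web.Compilation;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Forces ASP.NET to load (not just list) everything referenced,
                // including all DLLs in bin, before the first module lookup runs.
                ICollection assemblies = BuildManager.GetReferencedAssemblies();
                foreach (Assembly assembly in assemblies)
                {
                    // The load happened in the call above; this is just a trace.
                    System.Diagnostics.Debug.WriteLine(assembly.FullName);
                }
            }
        }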

    Read the article
