Search Results

Search found 21998 results on 880 pages for 'custom msbuild task'.


  • django dynamically deduce SITE_ID according to the domain

    - by dcrodjer
    I am trying to develop a site that will render multiple customized sites according to the domain name (the subdomain, to be more precise). All my domain names are redirected to the same Django project, so for each site there will be a corresponding model that defines how the site should look (SITE - SITE_SETTINGS). What is the best way to use the Django sites framework to get the SITE_ID of the current site from the domain name, instead of hard-coding it in the settings files (django sites documentation), and then run database queries and render the views accordingly? If using multiple settings files is my only option, can this (a WSGI script handling the domain name) be done? Update: So finally, following luke's answer, what I will do is define a custom middleware that makes the important variables available to the views according to the domain. As far as sitemaps and comments are concerned, I will have to customize the sitemaps app and write a custom sites model on which the other site models will be based. And since the comments system is based on the hard-coded SITE_ID, I can use it just as is on the models (the models will already be filtered by site through my sites framework), though the permalink feature will have to be customized. So, a lot of customization. Please point out if I am going wrong anywhere, because I have to ensure that the features of the project stay optimized. Thanks!

    Read the article

  • Zend_Auth and database SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to this one from parables-demo. Session data is properly saved to the database and session objects are serialized, but authentication stops working when I enable this custom SaveHandler. I have debugged the authentication and everything works fine up until the next request, when the authentication data is lost. I suspected that it has something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?

    Read the article

  • Counting in R data.table

    - by Simon Z.
    I have the following data.table:

        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

            VAL
         1:   1
         2:   2
         3:   2
         4:   3
         5:   1
         6:   3
         7:   3
         8:   2
         9:   2
        10:   1

    Now I want to perform two tasks: (1) count the occurrences of each number in VAL, and (2) count within all rows with the same value of VAL (first, second, third occurrence). At the end I want this result,

            VAL COUNT IDX
         1:   1     3   1
         2:   2     4   1
         3:   2     4   2
         4:   3     3   1
         5:   1     3   2
         6:   3     3   2
         7:   3     3   3
         8:   2     4   3
         9:   2     4   4
        10:   1     3   3

    where COUNT defines task 1 and IDX task 2. I tried to work with which and length using .I:

        DT[, list(COUNT = length(VAL == VAL[.I]),
                  IDX   = which(which(VAL == VAL[.I]) == .I))]

    but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I do know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?

    Read the article

  • High precision event timer

    - by rahul jv
    #include "target.h" #include "xcp.h" #include "LocatedVars.h" #include "osek.h" /** * This task is activated every 10ms. */ long OSTICKDURATION; TASK( Task10ms ) { void XCP_FN_TYPE Xcp_CmdProcessor( void ); uint32 startTime = GetQueryPerformanceCounter(); /* Trigger DAQ for the 10ms XCP raster. */ if( XCPEVENT_DAQ_OVERLOAD & Xcp_DoDaqForEvent_10msRstr() ) { ++numDaqOverload10ms; } /* Update those variables which are modified every 10ms. */ counter16 += slope16; /* Trigger STIM for the 10ms XCP raster. */ if( enableBypass10ms ) { if( XCPEVENT_MISSING_DTO & Xcp_DoStimForEvent_10msRstr() ) { ++numMissingDto10ms; } } duration10ms = (uint32)( ( GetQueryPerformanceCounter() - startTime ) / STOPWATCH_TICKS_PER_US ); } What would be the easiest (and/or best) way to synchronise to some accurate clock to call a function at a specific time interval, with little jitter during normal circumstances, from C++? I am working on WINDOWS operating system now. The above code is for RTAS OSEK but I want to call a function at a specific time interval for windows operating system. Could anyone assist me in c++ language ??

    Read the article

  • What is the best practice for including third party jar files in a Java program?

    - by ZoFreX
    I have a program that needs several third-party libraries, and at the moment it is packaged like so: zerobot.jar (my file), libs/pircbot.jar, libs/mysql-connector-java-5.1.10-bin.jar, libs/c3p0-0.9.1.2.jar. As far as I know, the "best" way to handle third-party libs is to put them on the classpath in the manifest of my jar file, which works cross-platform, won't slow down launch (which bundling them might) and doesn't run into legal issues (which repackaging might). The problem is for users who supply the third-party libraries themselves (example use case: upgrading them to fix a bug). Two of the libraries have the version number in the file name, which adds hassle. My current solution is that my program has a bootstrapping process which makes a new classloader and instantiates the program proper using it. This custom classloader adds all .jar files in libs/ to its classpath (see the sketch below). My current way works fine, but I now have two custom classloaders in my application, and a recent change to the code has caused issues that are difficult to debug, so if there is a better way I'd like to remove this complexity. It also seems like over-engineering for what I'm sure is a very common situation. So my question is, how should I be doing this?
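    For what it's worth, a minimal sketch of the bootstrapping approach described above (not necessarily the best answer to the question) could look like the following; the org.example.ZeroBot entry-point name is a made-up assumption:

        import java.io.File;
        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.util.ArrayList;
        import java.util.List;

        public class Bootstrap {
            public static void main(String[] args) throws Exception {
                // Collect every jar found in libs/ - file names and versions are irrelevant.
                List<URL> urls = new ArrayList<URL>();
                File[] entries = new File("libs").listFiles();
                if (entries != null) {
                    for (File entry : entries) {
                        if (entry.getName().endsWith(".jar")) {
                            urls.add(entry.toURI().toURL());
                        }
                    }
                }
                // Delegate to the normal application classloader as the parent.
                URLClassLoader loader = new URLClassLoader(
                        urls.toArray(new URL[0]), Bootstrap.class.getClassLoader());
                // Load the real entry point through the new loader and hand over.
                Class<?> mainClass = loader.loadClass("org.example.ZeroBot"); // hypothetical class name
                Method realMain = mainClass.getMethod("main", String[].class);
                realMain.invoke(null, (Object) args);
            }
        }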

    Read the article

  • one bidirectional tcp socket OR two unidirectional? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2G+). The OS is Linux. Will it be faster with 1 TCP socket (used for both send and recv) or with 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The possible benefit of 2 unidirectional connections comes from the Linux kernel, http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 :

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false; only a socket that is not unidirectional stops the kernel from taking the fast path on receive.
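    A rough sketch of the ping-pong measurement described above, written in Java purely to illustrate the methodology (NetPIPE itself is C); the port, message sizes and repetition count are assumptions taken from the question:

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class PingPong {
            public static void main(String[] args) throws Exception {
                final int port = 9000; // assumed free port

                // Echo peer: reads a message and immediately sends it back.
                Thread peer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            ServerSocket server = new ServerSocket(port);
                            Socket s = server.accept();
                            s.setTcpNoDelay(true);
                            DataInputStream in = new DataInputStream(s.getInputStream());
                            DataOutputStream out = new DataOutputStream(s.getOutputStream());
                            byte[] buf = new byte[1 << 13];
                            int size;
                            while ((size = in.readInt()) > 0) {
                                in.readFully(buf, 0, size);
                                out.writeInt(size);
                                out.write(buf, 0, size);
                                out.flush();
                            }
                            s.close();
                            server.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                peer.start();

                Socket s = new Socket("127.0.0.1", port); // the real test would target the remote machine
                s.setTcpNoDelay(true);
                DataInputStream in = new DataInputStream(s.getInputStream());
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                byte[] buf = new byte[1 << 13];
                for (int size = 2; size <= (1 << 13); size *= 2) { // 2^1 .. 2^13 bytes
                    int reps = 3;
                    long start = System.nanoTime();
                    for (int i = 0; i < reps; i++) {
                        out.writeInt(size);
                        out.write(buf, 0, size);
                        out.flush();
                        in.readFully(buf, 0, in.readInt()); // wait for the echo
                    }
                    long rtt = (System.nanoTime() - start) / reps;
                    System.out.println(size + " bytes: " + (rtt / 1000.0) + " us round trip");
                }
                out.writeInt(-1); // tell the peer to stop
                out.flush();
                s.close();
            }
        }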

    Read the article


  • NSArray containsObject: method

    - by Anthony Chan
    Hi, I have a simple question regarding Xcode coding, but I don't know why things are not behaving as I expect. I have an array of custom objects, and I just want to check whether a given one is in the array. I used the following code:

        NSArray *collection = [[NSArray alloc] initWithObjects:A, B, C, nil]; // custom "Item" objects
        Item *tempItem = [[Fruit alloc] initWithLength:1 width:2 height:3];   // 3 instance variables in "Item" objects
        if ([collection containsObject:tempItem]) {
            NSLog(@"collection contains this item");
        }

    I supposed the above check would give me a positive result, but it doesn't. Further, I checked whether the objects created are the same:

        NSLog(@"L:%i W:%i H:%i", itemToCheck.length, itemToCheck.width, itemToCheck.height);
        for (int i = 0; i < [collection count]; i++) {
            Item *itemInArray = [collection objectAtIndex:i];
            NSLog(@"collection contains L:%i W:%i H:%i", itemInArray.length, itemInArray.width, itemInArray.height);
        }

    In the console, this is what I got:

        L:1 W:2 H:3
        collection contains L:0 W:0 H:0
        collection contains L:1 W:2 H:3
        collection contains L:6 W:8 H:2

    Obviously the tempItem is inside the collection array, but nothing shows up when I use containsObject: to check for it. Could anyone give me some direction on which part I have wrong? Thanks a lot!

    Read the article

  • Communicating with all network computers regardless of IP address

    - by Stephen Jennings
    I'm interested in finding a way to enumerate all accessible devices on the local network, regardless of their IP address. For example, in a 192.168.1.X network, if there is a computer with a 10.0.0.X IP address plugged into the network, I want to be able to detect that rogue computer and preferably communicate with it as well. Both computers will be running this custom software. I realize that's a vague description, and a full solution to the problem would be lengthy, so I'm really looking for help finding the right direction to go in ("Look into using class XYZ and ABC in this manner") rather than a full implementation. The reason I want this is that our company ships imaged computers to thousands of customers, each of which has different network settings (most use the same IP scheme, but a large percentage do not, and most do not have DHCP enabled on their networks). Once the hardware arrives, we have a hard time getting it up on the network, especially if the IP scheme doesn't match, since there is no one technically oriented on-site. Ideally, I want to design some kind of console to be used from their main workstation which looks out on the network, finds all computers running our software, displays their current IP address, and allows you to change the IP. I know it's possible to do this because we sell a couple of pieces of custom hardware which have exactly this capability (plug the hardware in anywhere and view it from another computer regardless of IP). I'm hoping it's possible in .NET 2.0, but I'm open to using .NET 3.5 or P/Invoke if I have to.
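    A common direction for this kind of discovery is a limited UDP broadcast (255.255.255.255), which is delivered to hosts on the same segment even when their IP subnets differ. The question is about .NET, where UdpClient offers the equivalent calls; the sketch below only illustrates the idea in Java, and the port and payload are made-up assumptions:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.net.SocketTimeoutException;

        public class Discovery {
            public static void main(String[] args) throws Exception {
                DatagramSocket socket = new DatagramSocket();
                socket.setBroadcast(true);

                // Limited broadcast: reaches every host on the local segment,
                // whatever IP subnet each host happens to be configured with.
                byte[] probe = "WHO_IS_THERE".getBytes("US-ASCII"); // made-up payload
                socket.send(new DatagramPacket(probe, probe.length,
                        InetAddress.getByName("255.255.255.255"), 30303)); // made-up port

                // Devices running the companion software would answer to the sender's
                // address and port; collect replies until things go quiet.
                socket.setSoTimeout(2000);
                byte[] buf = new byte[512];
                try {
                    while (true) {
                        DatagramPacket reply = new DatagramPacket(buf, buf.length);
                        socket.receive(reply);
                        System.out.println("Found " + reply.getAddress().getHostAddress() + " -> "
                                + new String(reply.getData(), 0, reply.getLength(), "US-ASCII"));
                    }
                } catch (SocketTimeoutException endOfReplies) {
                    // no more answers
                }
                socket.close();
            }
        }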

    Read the article

  • Error in Firebug while validating form input using jQuery

    - by Param-Ganak
    I have written custom validation code in jQuery, which is working fine. I have a login form with two fields, userid and password, and custom client-side validation code for them. This code works fine and gives me the proper error messages for each situation. The problem is that when I enter invalid data in either or both fields and press the form's submit button, the proper error message is displayed, but at the same time Firebug shows the following error when the submit button is clicked:

        validate is not defined
        function onclick(event) { javascript: return validate(); } (click clientX=473, clientY=273)

    Here is the jQuery validation code:

        $(document).ready(function (){
            $("#id_login_form").validate({
                rules: {
                    userid: {
                        required: true,
                        minlength: 6,
                        maxlength: 20,
                        // basic: true
                    },
                    password: {
                        required: true,
                        minlength: 6,
                        maxlength: 15,
                        // basic: true
                    }
                },
                messages: {
                    userid: {
                        required: " Please enter the username.",
                        minlength: "User Name should be minimum 6 characters long.",
                        maxlength: "User Name should be maximum 15 characters long.",
                        // basic: "working here"
                    },
                    password: {
                        required: " Please enter the password.",
                        minlength: "Password should be minimum 6 characters long.",
                        maxlength: "Password should be maximum 15 characters long.",
                        // basic: "working here too.",
                    }
                },
                errorClass: "errortext",
                errorLabelContainer: "#messagebox"
            }
            });
        });

        /*
        $.validator.addMethod('username_alphanum', function (value) {
            return /^(?![0-9]+$)[a-zA-Z 0-9_.]+$/.test(value);
        }, 'User name should be alphabetic or Alphanumeric and may contain . and _.');

        $.validator.addMethod('alphanum', function (value) {
            return /^(?![a-zA-Z]+$)(?![0-9]+$)[a-zA-Z 0-9]+$/.test(value);
        }, 'Password should be Alphanumeric.');

        $.validator.addMethod('basic', function (value) {
            return /^[a-zA-Z 0-9_.]+$/.test(value);
        }, 'working working working');
        */

    So please tell me where I am going wrong in my jQuery code. Thank you!

    Read the article

  • ASP.Net Session Storage provider in 3-layer architecture

    - by Tedd Hansen
    I'm implementing a custom session storage provider in ASP.Net. We have a strict 3-layer architecture (Presentation-Business-Database) and therefore the session storage needs to go through the business layer. The business layer is accessed through WCF. The database is MSSQL. What I need is (in order of preference): (1) a commercial/free/open source product that solves this; (2) the source code of a SqlSessionStateStore (custom session store, not the ODBC sample on MSDN) that I can modify to use a middle layer. I've tried looking at the .Net source through Reflector, but the code is not usable. Note: I understand how to do this. I am looking for working samples, preferably ones that have been proven to work fine under heavy load. The ODBC sample on MSDN doesn't use the (new?) stored procs that the built-in SqlSessionStateStore uses - I'd like to use these if possible (it decreases traffic). Edit1: To answer Simon's question with more info: the ASP.Net Session() object can be stored InProc, in the ASP.Net State Service, or in SQL Server. In a secure 3-layer model the presentation layer (web server) does not have direct/physical access to the database layer (SQL Server). And even without the physical limitations, from an architectural standpoint you may not want this. InProc and the ASP.Net State Service do not support load balancing and don't have fault tolerance. Therefore the only option is to access SQL through a web service middle layer (the business layer).

    Read the article

  • Jetty 7 will not allow me to customize a session cookie path

    - by Bob Obringer
    Using Jetty 7.0.2, I am unable to set a custom session cookie path. I am hosting multiple sites on the same server, using Apache to proxy requests to the proper context. (I have replaced http with htp below, as Stack Overflow thinks my multiple links might be spam.)

        <VirtualHost *:80>
            ServerName context.domain.com
            ProxyRequests On
            ProxyPreserveHost Off
            <Proxy *:80>
                Order deny,allow
                Allow from 127.0.0.1
            </Proxy>
            ProxyPass / htp://localhost:8080/context/
            ProxyPassReverse / htp://localhost:8080/context/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    Jetty is running on the same server on port 8080 and my context is available at /context. The user accesses the application at htp://context.domain.com, but Jetty is setting the path for the session cookie to /context. This prevents the browser from accessing the cookie, since the actual path to the context is not being used. I need to override Jetty's default setting so that the cookie is set for the context but with the path at the root ( / ). In my Jetty webdefault.xml I have the following, which is partially working:

        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
            <param-value>CustomCookieName</param-value>
        </context-param>
        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionPath</param-name>
            <param-value>/</param-value>
        </context-param>

    The cookie is properly set with a custom name, but the SessionPath is NOT being applied. No matter what I set the value to, it refuses to set a cookie at any path but /context. This has been driving me crazy, so any help would be greatly appreciated.

    Read the article

  • XSD, restrictions and code generation

    - by bob
    Hello, I'm working on some code generation for an existing project and I want to start from an XSD. That way I can use tools such as Xsd2Code / xsd.exe to generate the code and also use the XSD to validate the XML. That part works without any problems. I also want to translate some of the restrictions to DataAnnotations (enriching Xsd2Code). For example, xs:minInclusive / xs:maxInclusive I can translate to a RangeAttribute. But what should I do with custom validation attributes that we created? Can I add custom facets / restrictions? And how? Or is there another solution / best practice? I would like to collect everything in a single (xsd) file, so that one file contains the structure of the class (model) including the validation (attributes) that has to be added.

        <xs:element name="CertainValue">
          <xs:simpleType>
            <xs:restriction base="xs:double">
              <xs:minInclusive value="1" />
              <xs:maxInclusive value="100" />
              <xs_custom:customRule attribute="value" />
            </xs:restriction>
          </xs:simpleType>
        </xs:element>

    Read the article

  • Host WCF in MVC2 Site

    - by Basiclife
    Hi, we've got a very large, complex MVC2 website. We want to add an API for some internal tools and decided to use WCF. Ideally, we want MVC itself to host the WCF service. Reasons include: although there are multiple tiers to the application, some functionality we'd like in the API requires the website itself (e.g. formatting emails); we use TFS to auto-build (continuous integration) and deploy, and the less we need to modify the build and release mechanism the better; and we use the Unity container and inversion of control throughout the application, so being part of the website would allow us to re-use configuration classes and other helper methods. I've written a custom ServiceBehavior which in turn has a custom InstanceProvider - this allows me to instantiate and configure a container which is then used to service all requests for class instances from WCF. So my question is: is it possible to host a WCF service from within MVC itself? I've only had experience with services / standard ASP.Net websites before and didn't realise MVC2 might be different until I actually tried to wire it into the config and nothing happened. After some googling, there don't seem to be many references to doing this - so I thought I'd ask here.

    Read the article

  • YQL + PHP : how to make a facebook login

    - by Jonathan
    Hi! I was reading some stuff about the YQL API that Yahoo! has provided. I am not sure, but it appears to be a collection of lots of third-party APIs brought together into one common language, right? What I don't get is how to perform the Facebook login through it so I can get the user's profile data. My project is to add a Facebook (and other social networks) login form, because the website won't have its own login; people will have to use a social network to sign in. I thought YQL would help me out with this task so I wouldn't have to develop lots of functions for each one of the networks. Reading http://developer.yahoo.com/yql/guide/yql-code-examples.html#sdk_yql, I understood how to make a Yahoo login so I can access some private data, but I couldn't find how I could do it with Facebook and others. So my questions: Can YQL help me with this? Can you give me a simple example of a Facebook session using it within PHP? Are there alternatives to aid me in this task? Thanks, Jonathan

    Read the article

  • Is it possible to navigate to the parent node of a matched node during XSLT processing?

    - by Darin
    I'm working with an OpenXML document, processing the main document part with some XSLT. I've selected a set of nodes via:

        <xsl:template match="w:sdt">
        </xsl:template>

    In most cases, I simply need to replace that matched node with something else, and that works fine. BUT, in some cases, I need to replace not the w:sdt node that matched, but the closest w:p ancestor node (i.e. the first paragraph node that contains the sdt node). The trick is that the condition used to decide one or the other is based on data derived from the attributes of the sdt node, so I can't use a typical XSLT XPath filter. I'm trying to do something like this:

        <xsl:template match="w:sdt">
          <xsl:choose>
            <xsl:when test={first condition}>
              {apply whatever templating is necessary}
            </xsl:when>
            <xsl:when test={exception condition}>
              <!-- select the parent of the ancestor w:p nodes and apply the appropriate templates -->
              <xsl:apply-templates select="(ancestor::w:p)/.." mode="backout" />
            </xsl:when>
          </xsl:choose>
        </xsl:template>

        <!-- by using "mode", only this template will be applied to those matching nodes from the apply-templates above -->
        <xsl:template match="node()" mode="backout">
          {CUSTOM FORMAT the node appropriately}
        </xsl:template>

    This whole concept works, BUT no matter what I've tried, it always applies the formatting from the CUSTOM FORMAT template to the w:p node, NOT its parent node. It's almost as if you can't reference a parent from a matching node. And maybe you can't, but I haven't found any docs that say you can't. Any ideas?

    Read the article

  • SL3 Grid RowDefinition Height Problem

    - by Chris
    I have a parent grid that contains multiple row definitions, all of which have their height set to 'Auto'. Within the parent grid are individual grids - each individual grid contains a custom content control. When the custom content control loads, the height may increase. What I am noticing is that when the height does increase, the content overlaps with the content in other rows. I have specified the horizontal and vertical alignments - am I missing something? Here is an example:

        <Grid x:Name="LayoutRoot">
          <Grid x:Name="ParentGrid">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <Grid Grid.Row="0">
              <CustomContentControl/>
            </Grid>
            <Grid Grid.Row="1">
              <CustomContentControl/>
            </Grid>
            <Grid Grid.Row="2">
              <CustomContentControl/>
            </Grid>
          </Grid>
        </Grid>

    Read the article

  • How to read a parameter passed to a facelet from a backing bean

    - by Antonio
    Hi, I've written a facelet, and a corresponding backing bean, that implements user management (addition, deletion and so on). I'd like to be able to perform some custom processing when, for instance, a new user is added. There is a "create" button in the facelet, whose click event is handled by its backing bean. At the end of the event handler, I'd like to be able to call a method of another backing bean, which is not known in advance because, ideally, the facelet can be used in several pages with different custom processing. I thought of implementing this feature by providing the facelet with a backing bean name and a method name, like this:

        <myfacelet:subaccounts backingBean="myBackingBean" createListener="createListener" />

    and at the end of the event handler calling #{myBackingBean.createListener} somehow. I'm using this method (along with some overloads) to obtain a MethodExpression:

        protected MethodExpression getMethodExpression(String beanName, String methodName,
                Class<?> expectedReturnType, Class<?>[] expectedParamTypes) {
            ExpressionFactory expressionFactory;
            MethodExpression method;
            ELContext elContext;
            String el;

            el = String.format("#{%s['%s']}", beanName, methodName);
            expressionFactory = getApplication().getExpressionFactory();
            elContext = getFacesContext().getELContext();
            method = expressionFactory.createMethodExpression(elContext, el,
                    expectedReturnType, expectedParamTypes);
            return method;
        }

    and the click event handler should look like:

        public void saveSubaccountListener(ActionEvent event) {
            MethodExpression method;
            ...
            method = getMethodExpression("backingBean", "createSubaccountListener", SubuserBean.class);
            if (method != null)
                method.invoke(getFacesContext().getELContext(), new Object[] { _editedSubuser });
        }

    That works fine as long as I provide an existing bean name (myBackingBean), but if I use backingBean the invoke() doesn't work, due to the following error:

        javax.el.PropertyNotFoundException: Target Unreachable, identifier 'backingBean' resolved to null

    Is there a way I can retrieve, from the facelet's backing bean, the value of a parameter that has been passed to the facelet? In my case, the value of backingBean, which should be myBackingBean? I've searched for and tried different solutions, but with no luck yet.
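    One alternative worth considering, sketched below and not necessarily the only fix: instead of passing the bean's name as a string, pass the bean itself to the button with <f:attribute name="targetBean" value="#{myBackingBean}"/> and read it back from the component inside the listener. The attribute name and the UserCreationListener callback interface here are made-up assumptions:

        import javax.faces.event.ActionEvent;

        public class SubaccountBean {

            // Hypothetical callback contract the page's backing bean would implement.
            public interface UserCreationListener {
                void userCreated(Object newUser);
            }

            private Object _editedSubuser;

            public void saveSubaccountListener(ActionEvent event) {
                // <f:attribute> stores a ValueExpression on the component; reading the
                // attribute map here evaluates it and returns the actual bean instance.
                Object target = event.getComponent().getAttributes().get("targetBean");
                if (target instanceof UserCreationListener) {
                    ((UserCreationListener) target).userCreated(_editedSubuser);
                }
            }
        }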

    Read the article

  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku, staging and production. They are near-identical environments (staging has extra configs, e.g. the RailsFootnotes and Bullet gems). When I run heroku logs --ps run --app jl-staging it returns logs like

        2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users`

    This log line comes from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with heroku logs --ps run --app jl-production there are no results - no heroku[run.1] process logs at all. Both environments have the same scheduled tasks, albeit at different times, but nonetheless both run their scheduled tasks at the specified times. Is there something I'm missing about heroku[run.1] processes in the production env? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs - maybe it only shows 24 hrs' worth of logs rather than the last 100 lines. I need to log and debug the [run.1] process in the production env, specifically the jewellover:warn_users task. Any ideas?

    Read the article

  • JQuery - Microsoft JScript runtime error: Object expected

    - by ydobonmai
    Below is the content of my .aspx page, and the "jquery-ui-1.7.1.custom.min.js" is in the same location as the .aspx file. When I run the website with debugging, I get the below error. I know I am terribly missing something here. Any clue?

    Error: Microsoft JScript runtime error: Object expected

    When I run without debugging, I get the following javascript error:

        Line: 10
        Error: 'jQuery' is undefined

    ASPX page content:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default3.aspx.cs" Inherits="Default3" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <script src="jquery-ui-1.7.1.custom.min.js" type="text/javascript"></script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <script>$(function () { alert('hello') });</script>
            </div>
            </form>
        </body>
        </html>

    Read the article

  • What exactly does this PHP code do?

    - by Rob
    Alright, my friend gave me this code for requesting headers and comparing them to what the header should be. It works perfectly, but I'm not sure why. Here is the code:

        $headers = apache_request_headers();
        $customheader = "Header: 7ddb6ffab28bb675215a7d6e31cfc759";

        foreach ($headers as $header => $value) { // 1
            $custom .= "$header: $value"; // 2
        }

        $mystring = $custom; // 3
        $findme = $customheader; // 4
        $pos = strpos($mystring, $findme);

        if ($pos !== false) {
            // Do something
        } else {
            exit();
        } // If it doesn't match, exit.

    I commented the code with some numbers relating to the following questions:

    1: What exactly is happening here? Is it setting $headers as $header AND $value?
    2: Again, I don't have any idea what is going on here.
    3: Why set the variable to a different variable? This is the only area where the variable is getting used, so is there a reason to set it to something else?
    4: Same question as 3.

    I'm sorry if this is a terrible question, but it's been bothering me, and I really want to know WHY it works. Well, I understand why it works; I guess I just want to know more specifically. Thanks for any insight you can provide.

    Read the article

  • How to hold a queue of messages and have a group of working threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:

                                                  / Worker 1 \
        =Request Channel= - [Holding Queue|||] -   Worker 2   - =Response Channel=
                                                  \ Worker 3 /

    That is:

    - Requests come in and enter a FIFO queue.
    - Identical workers then pick up tasks from the queue.
    - At any given time any worker may work on only one task.
    - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task.
    - When a task is complete, the worker places the result on the response channel.

    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but it's preferable to have a single waiting line, as some tasks may be accomplished faster than others. Furthermore, I'd like insight into how many jobs are remaining (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message queuing / work distribution pattern while avoiding polling? Edit: It appears I'm looking for the Message Dispatcher pattern - how can I implement this using Spring / Spring Integration?
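    This is not the Spring Integration wiring the question asks for, but the dispatch pattern itself can be sketched with plain java.util.concurrent: a fixed pool of workers draining one FIFO queue with no polling, and Futures acting as the response channel. The task count and simulated work below are made up for illustration:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class Dispatcher {
            public static void main(String[] args) throws Exception {
                // Three identical workers share one FIFO holding queue; an idle worker
                // takes the next task the moment it finishes, nothing polls.
                ExecutorService workers = Executors.newFixedThreadPool(3);
                List<Future<String>> responseChannel = new ArrayList<Future<String>>();

                for (int i = 0; i < 10; i++) {
                    final int id = i;
                    responseChannel.add(workers.submit(new Callable<String>() {
                        public String call() throws Exception {
                            Thread.sleep(100); // simulate work of varying length
                            return "result " + id;
                        }
                    }));
                }

                // Remaining jobs stay visible on the executor's queue; individual jobs
                // can be cancelled with Future.cancel(true), or all of them via shutdownNow().
                for (Future<String> response : responseChannel) {
                    System.out.println(response.get());
                }
                workers.shutdown();
            }
        }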

    Read the article

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10 MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed stream. Here's the question: where should I put the settings? Most people say I can't put them in my web.config; others say they can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes - should I set some parameters somewhere, considering my stream is XML (a dataset)? Thanks to everyone who would like to help. Alberto

    Read the article

  • ESXi with software iSCSI

    - by jharley
    Has anyone had any luck using the swiSCSI driver on ESXi? Following the instructions from VMware.com, I get to the point where I have the iSCSI HBA showing up, but no LUNs/targets are showing up. The iSCSI target is running on Solaris 10 update 5 and works fine with other initiators. The ESXi initiator (from the logs) sees the targets but just logs in and out of them every 2-5 seconds. We're using unauthenticated discovery, and over and over in /var/log/messages I see:

        iSCSI: bus 0 target 0 trying to establish session 0xb203f90 to portal 0, address 10.1.100.9 port 3260 group 1
        iSCSI: bus 0 target 0 establish session 0xb203f90 #4848 to port 0, address 10.1.100.9 port 3260 group 1, alias data/ESXi
        iSCSI: session 0xb203f90 dropping after receiving unexpected opcode 0x60
        iSCSI: session 0xb203f90 to data/ESXi dropped
        iSCSI: session 0xb203f90 to data/ESXi waiting 2 seconds before next login attempt

    The only other thing that seems out of whack is that my 'Recent Tasks' pane keeps filling with 'Browse Diagnostic Manager' events, and /var/log/vmware/hostd.log is filled with messages like these up to two times per second:

        [2008-09-19 16:05:57.901 'TaskManager' 196621 info] Task Created: haTask-ha-host-vim.DiagnosticManager.browser-776
        [2008-09-19 16:05:57.094 'TaskManager' 196621 info] Task Completed: haTask-ha-host-vim.DiagnosticManager.browser-766

    Any help would be appreciated.

    Read the article

  • drupal themes: how do I include several css files / js files on my theme's .info file?

    - by egarcia
    I'm creating a new Drupal theme. Until now, I only needed to include a single CSS file and a single JS file, so my theme's .info file had something like this:

        stylesheets[all][] = css/style.css
        scripts[] = js/script.js

    Now I must include jQuery and jQuery UI in order to use a calendar date picker. These come with 2 new JavaScript files and 1 additional CSS file that I must add to the site. The calendar input form is going to be used on all pages (in a side block), so it is OK for me to load the extra CSS/JavaScript on all pages. I think the easiest thing would be to reference them in the .info file itself. At first I tried to just put them there, separated by spaces:

        stylesheets[all][] = css/style.css css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/script.js js/jquery-1.4.2.min.js js/jquery-ui-1.8.1.custom.min.js

    I emptied Drupal's cache and... none of them loaded. I then tried separating each file with a comma and flushing the cache again. Same result. I've browsed some Drupal pages but could not find how to add several JavaScript/CSS files in one theme (they always seem to add just one of each). So, how do I include several CSS/JavaScript files in the .info file?

    Read the article
