Search Results

Search found 73679 results on 2948 pages for 'get http client info'.

  • Ajax request not receiving xml from Django

    - by amougeot
    I have a Django server which handles requests to a URL which will return some HTML for use in an image gallery. I can navigate to the URL and the browser will display the HTML that is returned, but I can't get that same HTML by doing an AJAX call (using jQuery) to the same URL. This is the view that generates the response: def gallery_images(request, gallery_name): return render_to_response('galleryimages.html', {'images': get_images_of_gallery(gallery_name)}, mimetype='text/xml') This is the 'galleryimages.html' template: {% for image in images %} <div id="{{image.name}}big"> <div class="actualImage" style="background-image:url({{image.image.name}});"> <h1>{{image.caption|safe}}</h1> </div> </div> {% endfor %} This is the jQuery call I am making: $("#allImages").load("http://localhost:8000/galleryimages/Web"); However, this loads nothing into my #allImages div. I've used firebug and ran jQuery's Ajax method .get("http://localhost:8000/galleryimages/Web") and firebug says that the response text is completely empty. When I check my Django server log, this is the entry I see for when I navigate to the URL manually, through my browser: [16/Jan/2010 17:34:10] "GET /galleryimages/Web HTTP/1.1" 200 215 This is the entry in the server log for when I make the AJAX call: [16/Jan/2010 17:36:19] "OPTIONS /galleryimages/Web HTTP/1.1" 200 215 Why does the AJAX request not get the xml that my Django page is serving?
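
    A likely culprit, given the OPTIONS entry in the log: the page issuing the call is not served from http://localhost:8000, so the browser treats the absolute URL as cross-origin, and jQuery's X-Requested-With header makes the request non-simple, which triggers the OPTIONS preflight instead of the GET. A minimal sketch, assuming the gallery page is served by the same Django instance so a relative URL keeps the request same-origin:

      $(function () {
          $.ajax({
              url: "/galleryimages/Web",   // relative, same-origin URL - no preflight
              type: "GET",
              dataType: "html",            // the view returns rendered template markup
              success: function (data) {
                  $("#allImages").html(data);
              },
              error: function (xhr, status, err) {
                  // surface failures instead of silently showing an empty div
                  if (window.console) { console.log("gallery load failed:", status, err); }
              }
          });
      });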

  • Is this the best way to make an API request using PHP CURL?

    - by Abs
    Hello all, I have a site that has a simple API which can be used via http. I wish to make use of the API and submit data about 1000-1500 times at one time. Here is their API: http://api.jum.name/ I have constructed the URL to make a submission but now I am wondering what is the best way to make these 1000-1500 API GET requests? Here is the PHP CURL implementation I was thinking of: $add = 'http://www.mysite.com/3rdparty/API/api.php?fn=post&username=test&password=tester&url=http://google.com&category=21&title=story a&content=content text&tags=Season,news'; curl_setopt ($ch, CURLOPT_URL, "$add"); curl_setopt ($ch, CURLOPT_POST, 0); curl_setopt ($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt'); curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE); $postdata = curl_exec ($ch); Shall I close the CURL connection every time I make a submission? Can I re-write the above in a better way that will make these 1000-1500 submissions quicker? Thanks all
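
    A sketch of one common approach: keep a single cURL handle, change only CURLOPT_URL per call, and close the handle once after the whole batch - reopening a connection for each of the 1000-1500 requests is usually the biggest cost (curl_multi_* is the next step if parallelism is needed). The $urls array below is a stand-in for the pre-built API URLs:

      <?php
      // Hypothetical batch: $urls holds the 1000-1500 prepared API GET URLs.
      $urls = array(/* ... */);

      $ch = curl_init();
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);
      curl_setopt($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt');

      foreach ($urls as $url) {
          curl_setopt($ch, CURLOPT_URL, $url);
          $response = curl_exec($ch);        // the handle (and its connection) is reused
          if ($response === false) {
              error_log('API call failed: ' . curl_error($ch));
          }
      }

      curl_close($ch);                       // close once, after all submissions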

  • How to notify client about updated UpdatePanel content on server side

    - by csh1981
    I have a problem with UpdatePanel.Update() which works initially but then stops. I have tumbled with this problem for some time and some background is needed so please read ahead. I have an ASP.net application in which I have a subpage that display computed information in graphs. Each graph is embedded in an UpdatePanel. The graph is a user control that uses the standard asp:Chart for display. My task is to enable this page with AJAX capabilities so the page is responsive during postbacks. When I access this page from another page, during the initial page rendering, I use a wait dialog for each graph and a pageload event on the client side. In the client event, a hidden button is clicked which a server event handles (the hidden button is inside an UpdatePanel so the postback is asynchronous). Each graph is computed and the UpdatePanels are in turn updated with the Chart content. This is done using UpdatePanel.Update. And it is successful. However, I also have some RadioButtons on the page. These are dynamically created. The purpose of them is to switch graph type --- to show the same data in a different way. Same type of time consuming computation is needed in order to do so. I subscribe on each RadioButton's OnCheckedChanged event and the postback is asynchronous since the radiobuttons are inside an UpdatePanel. In the server event handler I determine the type of graph and use this as an input to the Chart control. I then remove the old Chart control from my Panel and adds new Chart and then I call UpdatePanel.Update(). But with no success. Nothing happens, no errors, nothing. Why is this?? I think this is strange because if I compute every Chart data in the initial rendering instead of using the "Wait dialog"-solution described earlier then I can select graph types successfully and all subsequent AJAX requests work as intended. Also, the same code (computing the chart, removal, and adding the Chart control to Panel and UpdatePanel.Update()) is hit during the initial rendering of the page, and it works only the first time. Here is the method that computes the graph and adds it to the panel and update the UpdatePanel: public void UpdateGraph(GraphType type, GraphMapper mapper) { //Panel is the content of UpdatePanelGraph's Panel.Controls.Clear(); chart = new Chart(type, mapper); //Computation happens inside here panel.Controls.Add(chart); //UpdatePanelGraph is in UpdateMode Conditional and has //ChildrenAsTriggers set to false UpdatePanelGraph.Update(); } I really need a way for these radiobuttons to work, possible using some clientside JavaScript or another way of handling things on the server side. I have thought about using a JavaScript postback call on the UpdatePanel instead of the UpdatePanel.Update(). However, the issue I have here is how to notify the client side when the server side is finished with computing the graph? An plausible explanation of the strange behavior is also much appreciated. Any help appreciated, thanks
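
    A hedged sketch of one pattern that often helps in this situation (method and helper names are assumptions): remember the selected graph type in ViewState when the RadioButton event fires, and rebuild the Chart late in the page lifecycle (e.g. Page_PreRender) on every postback, so the dynamically added control is always present in the control tree when UpdatePanelGraph renders:

      protected void GraphTypeRadio_CheckedChanged(object sender, EventArgs e)
      {
          // the control's Text stands in for the graph type here (an assumption)
          ViewState["GraphType"] = ((RadioButton)sender).Text;
          UpdatePanelGraph.Update();                 // mark the conditional panel for refresh
      }

      protected void Page_PreRender(object sender, EventArgs e)
      {
          if (ViewState["GraphType"] != null)
          {
              panel.Controls.Clear();
              // BuildChart is a hypothetical helper wrapping the expensive chart computation
              panel.Controls.Add(BuildChart((string)ViewState["GraphType"]));
          }
      }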

  • client-side validation in custom validation attribute - asp.net mvc 4

    - by Giorgos Manoltzas
    I have followed some articles and tutorials over the internet in order to create a custom validation attribute that also supports client-side validation in an asp.net mvc 4 website. This is what i have until now: RequiredIfAttribute.cs [AttributeUsage(AttributeTargets.Property, AllowMultiple = true)] //Added public class RequiredIfAttribute : ValidationAttribute, IClientValidatable { private readonly string condition; private string propertyName; //Added public RequiredIfAttribute(string condition) { this.condition = condition; this.propertyName = propertyName; //Added } protected override ValidationResult IsValid(object value, ValidationContext validationContext) { PropertyInfo propertyInfo = validationContext.ObjectType.GetProperty(this.propertyName); //Added Delegate conditionFunction = CreateExpression(validationContext.ObjectType, _condition); bool conditionMet = (bool)conditionFunction.DynamicInvoke(validationContext.ObjectInstance); if (conditionMet) { if (value == null) { return new ValidationResult(FormatErrorMessage(null)); } } return ValidationResult.Success; } private Delegate CreateExpression(Type objectType, string expression) { LambdaExpression lambdaExpression = System.Linq.Dynamic.DynamicExpression.ParseLambda(objectType, typeof(bool), expression); //Added Delegate function = lambdaExpression.Compile(); return function; } public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context) { var modelClientValidationRule = new ModelClientValidationRule { ValidationType = "requiredif", ErrorMessage = ErrorMessage //Added }; modelClientValidationRule.ValidationParameters.Add("param", this.propertyName); //Added return new List<ModelClientValidationRule> { modelClientValidationRule }; } } Then i applied this attribute in a property of a class like this [RequiredIf("InAppPurchase == true", "InAppPurchase", ErrorMessage = "Please enter an in app purchase promotional price")] //Added "InAppPurchase" public string InAppPurchasePromotionalPrice { get; set; } public bool InAppPurchase { get; set; } So what i want to do is display an error message that field InAppPurchasePromotionalPrice is required when InAppPurchase field is true (that means checked in the form). The following is the relevant code form the view: <div class="control-group"> <label class="control-label" for="InAppPurchase">Does your app include In App Purchase?</label> <div class="controls"> @Html.CheckBoxFor(o => o.InAppPurchase) @Html.LabelFor(o => o.InAppPurchase, "Yes") </div> </div> <div class="control-group" id="InAppPurchasePromotionalPriceDiv" @(Model.InAppPurchase == true ? Html.Raw("style='display: block;'") : Html.Raw("style='display: none;'"))> <label class="control-label" for="InAppPurchasePromotionalPrice">App Friday Promotional Price for In App Purchase: </label> <div class="controls"> @Html.TextBoxFor(o => o.InAppPurchasePromotionalPrice, new { title = "This should be at the lowest price tier of free or $.99, just for your App Friday date." }) <span class="help-inline"> @Html.ValidationMessageFor(o => o.InAppPurchasePromotionalPrice) </span> </div> </div> This code works perfectly but when i submit the form a full post is requested on the server in order to display the message. 
So I created JavaScript code to enable client-side validation: requiredif.js (function ($) { $.validator.addMethod('requiredif', function (value, element, params) { /*var inAppPurchase = $('#InAppPurchase').is(':checked'); if (inAppPurchase) { return true; } return false;*/ var isChecked = $(param).is(':checked'); if (isChecked) { return false; } return true; }, ''); $.validator.unobtrusive.adapters.add('requiredif', ['param'], function (options) { options.rules["requiredif"] = '#' + options.params.param; options.messages['requiredif'] = options.message; }); })(jQuery); This is the proposed way on MSDN and in the tutorials I have found. Of course I have also inserted the needed scripts in the form: jquery.unobtrusive-ajax.min.js jquery.validate.min.js jquery.validate.unobtrusive.min.js requiredif.js BUT... client-side validation still does not work. So could you please help me find what I am missing? Thanks in advance.
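
    For reference, a hedged correction of requiredif.js: inside the validator method the third argument (params) already holds whatever the adapter stored in options.rules["requiredif"] (the '#InAppPurchase' selector), while the snippet above reads an undefined variable named param and also returns false whenever the box is checked, regardless of the field's value. (The attribute's constructor would likewise need a second propertyName parameter to match the two-argument usage shown above.)

      (function ($) {
          $.validator.addMethod('requiredif', function (value, element, params) {
              var isChecked = $(params).is(':checked');   // params is the '#InAppPurchase' selector
              if (isChecked) {
                  return $.trim(value).length > 0;        // required only while the box is checked
              }
              return true;                                // not checked -> nothing to validate
          }, '');

          $.validator.unobtrusive.adapters.add('requiredif', ['param'], function (options) {
              options.rules['requiredif'] = '#' + options.params.param;
              options.messages['requiredif'] = options.message;
          });
      })(jQuery);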

  • Apache logs 200 instead of 404

    - by elle
    I've been getting the following in my apache access log: "GET /work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000 HTTP/1.1" 200 5187 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)" If I try the URL, I get a 404 instead of 200 which the above request received. Is there a way I can confirm that the 200 was real and not spoofed? Where is the long info on the client coming from?

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
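
    One hedged workaround for the Puppet side: carbon's setup.py installs everything under /opt/graphite/lib (visible in the install log above) rather than the site-packages directory pip inspects, which is consistent with pip freeze not listing it. Instead of asking pip, the provisioning check can probe for the module or the install path directly, e.g.:

      # check_carbon.py - hedged sketch of a custom "is carbon installed?" probe
      import imp
      import sys

      sys.path.insert(0, "/opt/graphite/lib")   # carbon's install prefix, per the install log

      try:
          imp.find_module("carbon")
          print("carbon is installed")
          sys.exit(0)
      except ImportError:
          print("carbon is NOT installed")
          sys.exit(1)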

  • D-Link DIR-300 slows down / loses network

    - by basic6
    Hi there, there are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall (bldg B) near a window, attached a 13 dBi directional antenna (pointing to bldg A) and configured it as AP client in that wireless network. Then there's another AP, connected to the D-Link AP, acting as standard access point, which the computer is connected to. That's basically working so far, but: Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from bldg A), my Firefox keeps "Looking for" (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments, everything's working fine again (Internet access). I've no idea why this is happening, but usually it's at most every few weeks (most times when nobody was using the computer, so no traffic). Compared to the connection speed when I connect directly to the WLAN in bldg A (Laptop), the speed in bldg B is rather slow, but I have the impression that this difference is worse in the last few days. A few minutes ago, I got 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) and 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be way higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless connection between those buildings should slow things down, but I'm wondering why this difference has become that extreme. Thanks for any tips... Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described in first post), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: At some point the connection is dead (at this point I would have reset my D-Link if I was connected to it). Just as the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it's working again. Maybe my D-Link isn't the problem at all...

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS, for internal related "domains", (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, only the hosts configured with those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple weeks without issue on one machine, then suddenly stop working, or it'll happen on another 15 times per day. It's completely random for all workstations. When troubleshooting, I have opened up a command prompt, and issued various nslookup commands for some of these hosts, and they resolve, however I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc. The only solution thus far, is manually restarting the Windows DNS Client on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful enough to even attempt before restarting the DNS Client. I have tried two different DNS servers; BIND9, and Windows Server 2003 R2 DNS, and the behavior is the same. We have a single Netgear JGS524 switch all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.

  • MVC2 client/server validation of DateTime/Date using DataAnnotations

    - by Thomas
    The following are true: One of my columns (BirthDate) is of type Date in SQL Server. This very same column (BirthDate) is of type DateTime when EF generates the model. I am using JQuery UI Datepicker on the client side to be able to select the BirthDate. I have the following validation logic in my buddy class: [Required(ErrorMessageResourceType = typeof(Project.Web.ValidationMessages), ErrorMessageResourceName = "Required")] [RegularExpression(@"\b(0?[1-9]|1[012])[/](0?[1-9]|[12][0-9]|3[01])[/](19|20)?[0-9]{2}\b", ErrorMessageResourceType = typeof(Project.Web.ValidationMessages), ErrorMessageResourceName = "Invalid")] public virtual DateTime? BirthDate { get; set; } There are two issues with this: This will not pass server side validation (if I enable client side validation it works just fine). I am assuming that this is because the regular expression doesn't take into account hours, minutes, seconds as the value in the text box has already been cast as a DateTime on the server by the time validation occurs. If data already exists in the database and is read into the model and displayed on the page the BirthDate field shows hours, minutes, seconds in my text box (which I don't want). I can always use ToShortDateString() but I am wondering if there is some cleaner approach that I might be missing. Thanks
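
    A hedged alternative to the regex (which runs too late on the server, since the posted text has already been bound to a DateTime? by the time validation fires): mark the property as a date and give it an edit-mode format, so the textbox renders only the date portion when the view uses a templated helper such as Html.EditorFor:

      [Required(ErrorMessageResourceType = typeof(Project.Web.ValidationMessages),
                ErrorMessageResourceName = "Required")]
      [DataType(DataType.Date)]
      [DisplayFormat(ApplyFormatInEditMode = true, DataFormatString = "{0:MM/dd/yyyy}")]
      public virtual DateTime? BirthDate { get; set; }

    With a templated helper the textbox then shows only the date (no ToShortDateString() needed in the view), and the value produced by the jQuery UI datepicker binds cleanly on the server.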

  • Email with extra '.com' behind sender email address

    - by CHT
    Currently I had a situation where I sent an email to [email protected], but when I receive mail from [email protected], it showed as [email protected], with extra '.com' behind the email address, this just happen within this week. Before this, I didn't change any setting, currently I am using Outlook 2010. When I checked the email in webmail, it also showed it as [email protected]. It seem that it has nothing to do with Outlook. However, I also tried on Thunderbird 16.0.1, but still the problem is the same. Has anyone experienced this before? Is the problem caused by the sender or receiver? Header Message as below: Return-Path: [email protected] Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged)) by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650 for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500 Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126]) by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374 for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51" Subject: =?big5?B?scTByrPmLblzpfM=?= Date: Wed, 31 Oct 2012 14:25:16 +0800 Message-ID: X-MS-Has-Attach: yes X-MS-TNEF-Correlator: thread-topic: =?big5?B?scTByrPmLblzpfM=?= thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw== X-Priority: 1 Priority: Urgent Importance: high From: "Alice" [email protected] To: "Bob" [email protected] X-Spam-Score: undef - pointsoft.com.tw is whitelisted. X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6 X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default) X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031 X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36

  • .NET HttpListener Prefix issue with anything other than localhost

    - by jchristner
    I'm trying to use C# and HttpListener with a prefix of anything other than localhost, and it fails (e.g. if I give it "server1", http://localhost:1234 works but http://server1:1234 fails). The code is... HttpListener listener = new HttpListener(); String prefix = @"http://server1:1234"; listener.Prefixes.Add(prefix); listener.Start(); The failure occurs on listener.Start() with an exception of "Access is denied.".
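
    Listening on any prefix other than localhost requires either running the process elevated or registering a URL reservation for the account that runs it (and HttpListener prefixes must end with a trailing slash). A hedged example of the reservation, with a placeholder account name, run from an elevated prompt on Vista/Server 2008 or later (older systems use httpcfg instead of netsh):

      rem MYDOMAIN\myuser is a placeholder for the account running the listener
      netsh http add urlacl url=http://server1:1234/ user=MYDOMAIN\myuser

    With the reservation in place, the prefix in code would be @"http://server1:1234/".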

  • In WCF How Can I add SAML 2.0 assertion to SOAP Header?

    - by Tone
    I'm trying to add the saml 2.0 assertion node from the soap header example below - I came across the samlassertion type in the .net framework but that looks like it is only for saml 1.1. <S:Header> <To xmlns="http://www.w3.org/2005/08/addressing">https://rs1.greenwaymedical.com:8181/CONNECTGateway/EntityService/NhincProxyXDRRequestSecured</To> <Action xmlns="http://www.w3.org/2005/08/addressing">tns:ProvideAndRegisterDocumentSet-bRequest_Request</Action> <ReplyTo xmlns="http://www.w3.org/2005/08/addressing"> <Address>http://www.w3.org/2005/08/addressing/anonymous</Address> </ReplyTo> <MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:662ee047-3437-4781-a8d2-ee91bc940ef0</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" wsu:Id="_1"> <wsu:Created>2010-05-26T03:51:57Z</wsu:Created> <wsu:Expires>2010-05-26T03:56:57Z</wsu:Expires> </wsu:Timestamp> <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="bd1ecf8d-a6d8-488d-9183-a11227c6a219" IssueInstant="2010-05-26T03:51:57.959Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=SU,O=SAML User,L=Los Angeles,ST=CA,C=US</saml2:Issuer> <saml2:Subject> <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">UID=kskagerb</saml2:NameID> <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key"> <saml2:SubjectConfirmationData> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEUg..gwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </saml2:SubjectConfirmationData> </saml2:SubjectConfirmation> </saml2:Subject> <saml2:AuthnStatement AuthnInstant="2009-04-16T13:15:39.000Z" SessionIndex="987"> <saml2:SubjectLocality Address="158.147.185.168" DNSName="cs.myharris.net"/> <saml2:AuthnContext> <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:X509</saml2:AuthnContextClassRef> </saml2:AuthnContext> </saml2:AuthnStatement> <saml2:AttributeStatement> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Karl S Skagerberg</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">InternalTest2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:nhin:names:saml:homeCommunityId"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.16.840.1.113883.3.441</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role"> <saml2:AttributeValue> <hl7:Role 
xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="307969004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED_CT" displayName="Public Health" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse"> <saml2:AttributeValue> <hl7:PurposeForUse xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="PUBLICHEALTH" codeSystem="2.16.840.1.113883.3.18.7.1" codeSystemName="nhin-purpose" displayName="Use or disclosure of Psychotherapy Notes" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:resource:resource-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">500000000^^^&amp;1.1&amp;ISO</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> <saml2:AuthzDecisionStatement Decision="Permit" Resource="https://158.147.185.168:8181/SamlReceiveService/SamlProcessWS"> <saml2:Action Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Execute</saml2:Action> <saml2:Evidence> <saml2:Assertion ID="40df7c0a-ff3e-4b26-baeb-f2910f6d05a9" IssueInstant="2009-04-16T13:10:39.093Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=Harris,O=HITS,L=Melbourne,ST=FL,C=US</saml2:Issuer> <saml2:Conditions NotBefore="2009-04-16T13:10:39.093Z" NotOnOrAfter="2009-12-31T12:00:00.000Z"/> <saml2:AttributeStatement> <saml2:Attribute Name="AccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Ref-1234</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="InstanceAccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Instance-1</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> </saml2:Assertion> </saml2:Evidence> </saml2:AuthzDecisionStatement> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#bd1ecf8d-a6d8-488d-9183-a11227c6a219"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>ONbZqPUyFVPMx4v9vvpJGNB4cao=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Dm/aW5bB..pF93s=</ds:SignatureValue> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEU..bzqgwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </ds:Signature> </saml2:Assertion> <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" Id="_2"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsse S"/> </ds:CanonicalizationMethod> <ds:SignatureMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#_1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsu wsse S"/> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"> <wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">bd1ecf8d-a6d8-488d-9183-a11227c6a219</wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security> </S:Header> I've been researching for days and cannot seem to come up with a straightforward way of doing this in WCF. The web service is running on Glassfish and is soap 1.1, I've tried using all the packaged wcf bindings but have not been able to get them to work. I started down the path of using a MessageInspector, and wrote one but then realized there must be a better way, surely WCF provides some way to insert saml 2.0 assertions. I've made the most progress writing a custom binding - i've been able to get the timestamp and signature nodes in the soap header, but cannot for the life of me figure out the saml assertion. Any ideas? public static System.ServiceModel.Channels.Binding BuildCONNECTCustomBinding() { TransportSecurityBindingElement transportSecurityBindingElement = SecurityBindingElement.CreateCertificateOverTransportBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); TextMessageEncodingBindingElement textMessageEncodingBindingElement = new TextMessageEncodingBindingElement(MessageVersion.Soap11WSAddressing10, System.Text.Encoding.UTF8); HttpsTransportBindingElement httpsTransportBindingElement = new HttpsTransportBindingElement(); SecurityTokenReferenceType securityTokenReference = new SecurityTokenReferenceType(); BindingElementCollection bindingElementCollection = new BindingElementCollection(); bindingElementCollection.Add(transportSecurityBindingElement); bindingElementCollection.Add(textMessageEncodingBindingElement); bindingElementCollection.Add(httpsTransportBindingElement); CustomBinding cb = new CustomBinding(bindingElementCollection); cb.CreateBindingElements(); return cb; }

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account / site. This can be done with Fastcgi (last update 2006…) but with Fastcgid APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong) , even if you create a pool per process, all sites accross all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure ! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case then shouldn't APC have to use this user and not have access to other pools cache ? An article here (in 2011) suggests that you would need to run one process per pool creating multiple launchers on different ports and different config files with one pool per config file : http://groups.drupal.org/node/198168 Is this still neceessary ? If so what would be the impact of running say 800 processes of php-fpm ? Would it be mainly memory ? If so how can I work out what the memory impact would be ? I guess that it would be better to run 800 times php-fpm then to have accounts creating multiple APC caches for a single site ? If on average an account creates a 50MB cache and creates 3 caches per account that makes 150Mb per account which makes 120GB… However if each account uses on average only 50Mb that would make 40GB We will have at least 128GB of ram on our next server so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB ! What do you think is PHP-FPM the best way to go to provide secure APC cache on shared hosting with a server that has a decent amount of memory ? Or should I be looking at another system ? Thanks !
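
    A hedged sketch of a per-account pool definition (names and paths are placeholders). One caveat worth checking: APC's shared memory segment belongs to the php-fpm master process, so pools running under a single master still share it even with distinct UID/GID/chroot settings; genuinely separate caches need a separate master (its own config file and port) per account, which is what drives the 800-process estimate above.

      ; /etc/php-fpm.d/account1.conf - one pool per account
      [account1]
      user = account1
      group = account1
      listen = 127.0.0.1:9001
      pm = ondemand                  ; spawn workers only while the site is busy
      pm.max_children = 5
      pm.process_idle_timeout = 10s
      ;chroot = /home/account1      ; optional, as discussed above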

  • no namenode error in pseudo-mode

    - by Anshu Basia
    I'm new to hadoop and is in learning phase. As per Hadoop Definitve guide, i have set up my hadoop in pseudo distributed mode and everything was working fine. I was even able to execute all the examples from chapter 3 yesterday. Today, when i rebooted my unix and tried to run start-dfs.sh and then tried http://localhost/50070....it is showing error and when i try to stop dfs (stop-dfs.sh) it says no namenode to stop. I have been googling the issue but no result. Also, when i format my namenode again...everything starts working fine and i'm able to connect to the localhost/50070 and even replicate files and directories in hdfs but as soon as i restart my linux and try to connect to hdfs the same problem comes up. Below is the error log: ************************************************************/ 2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = ubuntu/127.0.1.1 STARTUP_MSG: args = [] STARTUP_MSG: version = 0.20.203.0 STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011 ************************************************************/ 2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered. 2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered. 
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true 2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean 2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist. 2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162) 2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162) 2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1 ************************************************************/ Any help is appreciated Thank-you
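
    The key lines are the ones pointing at /tmp/hadoop-anshu/dfs/name: without an explicit setting, the namenode keeps its image under hadoop.tmp.dir (which defaults to /tmp), and many distributions wipe /tmp at boot - which is why everything works again only after a fresh format. A hedged fix is to move the storage directories somewhere persistent in conf/hdfs-site.xml (the paths below are examples) and then format the namenode one last time:

      <!-- conf/hdfs-site.xml -->
      <property>
        <name>dfs.name.dir</name>
        <value>/home/anshu/hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/anshu/hdfs/data</value>
      </property>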

  • 401 Unauthorized using oauth with twitter on iPhone

    - by menkel
    I use Twitter-OAuth-iPhone. http://github.com/bengottlieb/Twitter-OAuth-iPhone/ when i try to login on twitte, I received an error 401 i received the delegate call for - (void) OAuthTwitterController: (SA_OAuthTwitterController *) controller authenticatedWithUsername: (NSString *) username { but when i try to send a tweet just after [_engine sendUpdate: [NSString stringWithFormat:@"Hello World"]]; i received an error 401 failed with error: Error Domain=HTTP Code=401 "Operation could not be completed. (HTTP error 401.)" i have check my twitter application setting Type: Client Default Access type: Read & Write

  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (+ disk & mem) enabled, and it seems to cache content from my appserver fine. My app server sets Expiration headers and Last-modified. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs: blablabla not cached. Reason: Query string present but no explicit expiration time Any clues on why Apache is not caching content? The only difference is the Apache version. Locally I am running 2.2. This is from my config CacheRoot "/var/cache/apache2/" CacheEnable disk / This is example output < HTTP/1.1 200 OK < Date: Mon, 19 Nov 2012 16:09:13 GMT < Server: Sun GlassFish Enterprise Server v2.1.1 < X-Powered-By: Servlet/2.5 < Expires: Tue Nov 20 05:00:00 CET 2012 < Last-Modified: Mon Nov 19 17:09:13 CET 2012 < Cache-Control: no-transform < Content-Type: application/x-javascript < Transfer-Encoding: chunked
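
    One thing worth ruling out: mod_cache only caches responses to query-string requests when they carry an explicit, parseable freshness lifetime, and the Expires value shown above ("Tue Nov 20 05:00:00 CET 2012") is not in the RFC 1123 date format HTTP requires, so Apache treats it as absent. A hedged sketch of the two usual remedies - emit a well-formed header from the app server, or let Apache assign a lifetime via mod_expires:

      # what the app server should send (RFC 1123 date, GMT), e.g.:
      #   Expires: Tue, 20 Nov 2012 04:00:00 GMT
      #   Cache-Control: max-age=3600
      # or, in the Apache config, assign a lifetime with mod_expires:
      ExpiresActive On
      ExpiresByType application/x-javascript "access plus 1 hour"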

  • EclEmma JAVA Code coverage - Unable to coverage service layer of RESTful Webservice

    - by Radhika
    I am using EMMA eclipse plugin to generate code coverage reports. My application is a RESTFul webservice. Junits are written such that a client is created for the webservice and invoked with various inputs. However EMMA shows 0% coverage for the source folder. The test folder alone is covered. The application server(jetty server) is started using a main method. Report: Element Coverage Covered Instructions Total Instructions MyRestFulService 13.6% 900 11846 src 0.5% 49 10412 test 98% 1021 1434 Junit Test method: @Test public final void testAddFlow() throws Exception { Client c = Client.create(); WebResource webResource = c.resource(BASE_URI); // Sample files for Add String xhtmlDocument = null; Iterator iter = mapOfAddFiles.entrySet().iterator(); while (iter.hasNext()) { Map.Entry pairs = (Map.Entry) iter.next(); try { document = helper.readFile(requestPath + pairs.getKey()); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } /* POST */ MultiPart multiPart = new MultiPart(); multiPart.bodyPart(.... ........... ClientResponse response = webResource.path("/add").type( MEDIATYPE_MULTIPART_MIXED).post(ClientResponse.class, multiPart); assertEquals("TESTING ADD FOR >>>>>>> " + pairs.getKey(), Status.OK, response.getClientResponseStatus()); } } } Invoked service method: @POST @Path("add") @Consumes("multipart/mixed") public Response add(MultiPart multiPart) throws Exception { Status status = null; List<BodyPart> bodyParts = null; bodyParts = multiPart.getBodyParts(); status = //call to business layer return Response.ok(status).build(); }
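
    A hedged guess at the 0% source coverage: EclEmma only instruments classes loaded inside the JVM it launches, so if the Jetty server is started separately via its main method, the resource classes never run under the coverage agent - only the test client does. Starting the embedded server inside the test JVM usually brings the service layer into the report; a sketch with Jetty and the Jersey 1.x servlet (port and package name are assumptions):

      import org.eclipse.jetty.server.Server;
      import org.eclipse.jetty.servlet.ServletContextHandler;
      import org.eclipse.jetty.servlet.ServletHolder;
      import org.junit.AfterClass;
      import org.junit.BeforeClass;

      import com.sun.jersey.spi.container.servlet.ServletContainer;

      public class AddFlowTestBase {

          private static Server server;

          @BeforeClass
          public static void startServer() throws Exception {
              server = new Server(9998);   // port is an assumption (should match BASE_URI)
              ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
              ServletHolder jersey = new ServletHolder(ServletContainer.class);
              // package below is an assumption - point it at the REST resource classes
              jersey.setInitParameter("com.sun.jersey.config.property.packages", "my.restful.service");
              context.addServlet(jersey, "/*");
              server.setHandler(context);
              server.start();
          }

          @AfterClass
          public static void stopServer() throws Exception {
              server.stop();
          }
      }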

    Read the article

  • Cannot create list in SharePoint 2010 using Client Object Model

    - by Boris
    I am trying to utilize the SharePoint 2010 Client Object Model to create a list based on a custom template. Here's my code:
    public void MakeList(string title, string listTemplateGUID, int localeIdentifier)
    {
        string message;
        string listUrl;
        List newList;
        Guid template;
        ListCreationInformation listInfo;
        Microsoft.SharePoint.Client.ListCollection lists;
        try
        {
            listUrl = title.Replace(spaceChar, string.Empty);
            template = GetListTemplate((uint)localeIdentifier, listTemplateGUID);
            listInfo = new ListCreationInformation();
            listInfo.Url = listUrl;
            listInfo.Title = title;
            listInfo.Description = string.Empty;
            listInfo.TemplateFeatureId = template;
            listInfo.QuickLaunchOption = QuickLaunchOptions.On;
            clientContext.Load(site);
            clientContext.ExecuteQuery();
            lists = site.Lists;
            clientContext.Load(lists);
            clientContext.ExecuteQuery();
            newList = lists.Add(listInfo);
            clientContext.ExecuteQuery();
        }
        catch (ServerException ex)
        {
            //...
        }
    }
    Now, this particular part,
    newList = lists.Add(listInfo);
    clientContext.ExecuteQuery();
    the one that is supposed to create the actual list, throws an exception:
    Message: Cannot complete this action. Please try again.
    ServerErrorCode: 2130239231
    ServerErrorTypeName: Microsoft.SharePoint.SPException
    Could anyone please help me figure out what I am doing wrong? Thanks.

    Read the article

  • EJB3 JNDI Lookup Failure in JEE application client

    - by Hank
    I'm trying to access an EJB3 from a JEE client application, but I keep getting nothing but lookup failures. My JEE application 'CoreServer' exposes a number of beans with remote interfaces. I have no problem accessing them from a web application deployed on the same GlassFish v3.0.1. Now I'm trying to access them from a client application:
    public class Main {
        public static void main(String[] args) {
            CampaignControllerRemote bean = null;
            try {
                InitialContext ctx = new InitialContext();
                bean = (CampaignControllerRemote) ctx.lookup("java:global/CoreServer/CampaignController");
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
            if (bean != null) {
                Campaign campaign = bean.get(361);
                if (campaign != null) {
                    System.out.println("Got " + campaign);
                }
            }
        }
    }
    When I deploy it to GlassFish and run it from the appclient, I get this error:
    Lookup failed for 'java:global/CoreServer/CampaignController' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=localhost,orb'sInitialPort=3700
    However, that's exactly the same JNDI name I use when I look up the bean from the web application (via SessionContext, not InitialContext - does that matter?). Also, when I deploy 'CoreServer', GlassFish reports:
    Portable JNDI names for EJB CampaignController : [java:global/CoreServer/CampaignController!mvs.api.CampaignControllerRemote, java:global/CoreServer/CampaignController]
    Glassfish-specific (Non-portable) JNDI names for EJB CampaignController : [mvs.api.CampaignControllerRemote, mvs.api.CampaignControllerRemote#mvs.api.CampaignControllerRemote]
    I tried all four names, none worked. Is the appclient unable to access beans with (only) Remote interfaces?
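    For comparison, a minimal sketch of the injection-based alternative for an application client, assuming the client really is launched through GlassFish's appclient container (a plain java launch performs no injection and also needs extra ORB configuration for remote lookups). CampaignControllerRemote and Campaign are the question's own types:

    import javax.ejb.EJB;

    public class Main {

        // The appclient container injects into static fields of the main class;
        // the bean is resolved by its remote interface type.
        @EJB
        private static CampaignControllerRemote bean;

        public static void main(String[] args) {
            Campaign campaign = bean.get(361);
            if (campaign != null) {
                System.out.println("Got " + campaign);
            }
        }
    }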

    Read the article

  • Does SmtpClient class represent POP3 client or…?

    - by SourceC
    I assume that web controls (such as the PasswordRecovery control) use SmtpClient to send email messages. If so, does SmtpClient represent a POP3 client, or does SmtpClient forward email messages to a POP3 client? Do the attributes specified inside the <smtp> element in web.config map to the SmtpClient class?
    <system.net>
      <mailSettings>
        <smtp deliveryMethod="Network" ...></smtp>
      </mailSettings>
    </system.net>
    One of the possible values for the deliveryMethod attribute is Network, which says that email should be sent through the network to an SMTP server. In other words, does this value mean the email is sent to an SMTP server using the SMTP protocol? For the PasswordRecovery control to be able to send email messages, we need to set basic properties in the <MailDefinition> subelement of the PasswordRecovery control. Thus I assume MailDefinition is used by controls to create an email message?

    Read the article

  • Weird certificate error when trying to generate web service client from secure site

    - by Vlad
    Dear Stack Overflow, I get a weird error when trying to use the Axis 1.4 WSDL2Java tool to generate client code for a web service that is installed on a secure IIS site. When I run the tool I get the following SSL exception:
    javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching XXXXXXX.net found
        at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:174)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1591)
        at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:187)
        at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:181)
        at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:975)
        at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:123)
        at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:516)
        at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:454)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:884)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1096)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1123)
    The weird thing is that this error only occurs when I run WSDL2Java, and only for this particular server. I have another web server with an identical set-up and everything works fine there. I triple-checked all the keystores and it looks like all the CA certificates are loaded correctly. I tried using another server with an identical setup and was able to generate the client proxy code without any problems. The weird thing is that if I use the code generated from the other server against the problem server, everything works fine. It is only WSDL2Java that is giving me a problem.
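    The "No name matching XXXXXXX.net found" message is the JSSE host-name check failing: the certificate the problem server presents apparently does not list the host name used in the WSDL URL, while the other server's certificate presumably does. A small diagnostic sketch that prints which names the certificate actually carries; the host and port are placeholders for the masked server in the question:

    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    // Diagnostic sketch: connect with a plain SSLSocket (no host-name check),
    // complete the handshake, and print the subject and subjectAltNames of the
    // server certificate. Run with -Djavax.net.ssl.trustStore=... if the CA
    // certificates live in a custom keystore.
    public class CertNameCheck {
        public static void main(String[] args) throws Exception {
            String host = "XXXXXXX.net"; // placeholder host from the question
            int port = 443;

            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
            try {
                socket.startHandshake();
                X509Certificate leaf =
                        (X509Certificate) socket.getSession().getPeerCertificates()[0];
                System.out.println("Subject: " + leaf.getSubjectX500Principal());
                System.out.println("SANs:    " + leaf.getSubjectAlternativeNames());
            } finally {
                socket.close();
            }
        }
    }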

    Read the article

  • ejb testing issues with netbeans and openejb

    - by SibzTer
    I have created a NetBeans 6.7 Enterprise Application project with EJB and WAR modules, containing a test stateless session EJB with a simple sayHello() method. I also added the OpenEJB library in order to unit test the EJB. Everything runs fine except that I keep getting the following error:
    Testsuite: com.myapp.test.NewEmptyJUnitTest
    Apache OpenEJB 3.1.1 build: 20090530-06:18 http://openejb.apache.org/
    INFO - openejb.home = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb
    INFO - openejb.base = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb
    INFO - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service)
    INFO - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager)
    INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar
    INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar
    INFO - Found EjbModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar
    INFO - Found ClientModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar
    INFO - Found ClientModule in classpath: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar
    INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar
    INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar
    INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar
    INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar
    INFO - Beginning load: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar
    INFO - Configuring enterprise application: classpath.ear
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant-launcher.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: xml-resolver-1.2.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: webservices-tools.jar
    java.lang.Exception: Could not load 1/0/com/sun/codemodel/CodeWriter.class
        at org.apache.xbean.finder.ClassFinder.readClassDef(ClassFinder.java:730)
    ....
    It turns out that I am picking up the GlassFish library webservices-tools.jar from somewhere, and I can't find out how to get rid of it, so I get the bunch of exceptions whenever I try to run any JUnit tests. Has anyone faced this issue before? Can you help me resolve it, please? Thanks.
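    Since embedded OpenEJB scans everything it finds on the test classpath, one approach is to narrow the scan when bootstrapping the container. A sketch follows; the include/exclude property names are an assumption based on the OpenEJB 3.x configuration documentation, so verify them against your version:

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // Sketch: bootstrap embedded OpenEJB from a test while restricting the
    // classpath scan to the EJB module's build output.
    public class EmbeddedOpenEjbTestSupport {

        public static InitialContext newContext() throws Exception {
            Properties p = new Properties();
            p.put(Context.INITIAL_CONTEXT_FACTORY,
                  "org.apache.openejb.client.LocalInitialContextFactory");
            // Assumed property names: only scan the EJB module, skip ant/glassfish jars.
            p.put("openejb.deployments.classpath.include", ".*TestEnterpriseApp-ejb.*");
            p.put("openejb.deployments.classpath.exclude", ".*(ant|webservices-tools).*");
            return new InitialContext(p);
        }
    }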

    Read the article

  • PHP Server did not recognize the value of HTTP Header SOAPAction

    - by Joe
    I am making my first SOAP client and I am stuck on the headers: I am getting a response, but when I look at my request it has a soap:Body but no soap:Header. The web service needs 3 parameters: 1. UserName, 2. Password, 3. errorMessage. This is the code I have set up:
    $SOAPAction = 'http://localhost/DriveAwayPriceCalculation/PriceCalculation'; // Namespace of the WS.
    // $SoapHeaders = array('User123' => $UserName, 'Password123' => $Password, '' => $errorMessage);
    $client = new nusoap_client("https://test.com/CalculationWS.asmx?WSDL", false, $UserName, $Password, $errorMessage);
    $headers = new SoapHeader('http://localhost/DriveAwayPriceCalculation/PriceCalculation', true, $SoapHeaders);
    As I said, I am just starting out in SOAP (and PHP), so any help would be great. Thanks.

    Read the article
