Daily Archives

Articles indexed Tuesday, June 3, 2014


  • TableView Cells Use Whole Screen Height

    - by Kyle
    I read through the tutorial "Appcelerator: Using JSON to Build a Twitter Client" and attempted to create my own simple application that talks to a Jetty server I set up running some Spring code. I basically make an HTTP GET request that gives me a bunch of contacts in JSON format, populate several rows with my JSON data, and build a TableView. All of that works; however, my TableView rows take up the whole screen. Each row is one screen. I can scroll up and down and see all my data, but I'm trying to figure out what's wrong in my styling that's making the cells use the whole screen. My CSS is not great, so any help is appreciated. Thanks! Here's my js file that's loading the TableView:

        // create variable "win" to refer to current window
        var win = Titanium.UI.currentWindow;

        // Function loadContacts()
        function loadContacts() {
            // empty array "rowData" for table view cells
            var rowData = [];

            // create http client
            var loader = Titanium.Network.createHTTPClient();

            // set http request method and url
            loader.setRequestHeader("Accept", "application/json");
            loader.open("GET", "http://localhost:8080/contactsample/contacts");

            // run the function when the data is ready for us to process
            loader.onload = function() {
                Ti.API.debug("JSON Data: " + this.responseText);

                // evaluate json
                var contacts = JSON.parse(this.responseText);

                for (var i = 0; i < contacts.length; i++) {
                    var id = contacts[i].id;
                    Ti.API.info("JSON Data, Row[" + i + "], ID: " + contacts[i].id);
                    var name = contacts[i].name;
                    Ti.API.info("JSON Data, Row[" + i + "], Name: " + contacts[i].name);
                    var phone = contacts[i].phone;
                    Ti.API.info("JSON Data, Row[" + i + "], Phone: " + contacts[i].phone);
                    var address = contacts[i].address;
                    Ti.API.info("JSON Data, Row[" + i + "], Address: " + contacts[i].address);

                    // create row
                    var row = Titanium.UI.createTableViewRow({ height: 'auto' });

                    // create row's view
                    var contactView = Titanium.UI.createView({
                        height: 'auto',
                        layout: 'vertical',
                        top: 5, right: 5, bottom: 5, left: 5
                    });

                    var nameLbl = Titanium.UI.createLabel({
                        text: name,
                        left: 5,
                        height: 24,
                        width: 236,
                        textAlign: 'left',
                        color: '#444444',
                        font: { fontFamily: 'Trebuchet MS', fontSize: 16, fontWeight: 'bold' }
                    });

                    var phoneLbl = Titanium.UI.createLabel({
                        text: phone,
                        top: 0, bottom: 2,
                        height: 'auto',
                        width: 236,
                        textAlign: 'right',
                        font: { fontSize: 14 }
                    });

                    var addressLbl = Titanium.UI.createLabel({
                        text: address,
                        top: 0, bottom: 2,
                        height: 'auto',
                        width: 236,
                        textAlign: 'right',
                        font: { fontSize: 14 }
                    });

                    contactView.add(nameLbl);
                    contactView.add(phoneLbl);
                    contactView.add(addressLbl);
                    row.add(contactView);
                    row.className = "item" + i;
                    rowData.push(row);
                }

                Ti.API.info("RowData: " + rowData);

                // create table view
                var tableView = Titanium.UI.createTableView({ data: rowData });
                win.add(tableView);
            };

            // send request
            loader.send();
        }

        // get contacts
        loadContacts();

    And here are some screens showing my problem. I tried playing with the top, bottom, right and left values a bit and didn't seem to be getting anywhere. All help is greatly appreciated. Thanks!
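
    One thing worth checking, offered as a hedged guess rather than a confirmed fix: with height:'auto' on both the row and the nested view, the row's height can end up resolving to the full parent height. A minimal sketch that pins each row to an explicit height instead (the value 80 is an arbitrary placeholder, sized to fit the three labels):

        // Hypothetical variant: a fixed row height stops the row from
        // expanding to fill the screen
        var row = Titanium.UI.createTableViewRow({
            height: 80
        });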

    Read the article

  • Any example on how to implement the new VerificationController and the KNOWN_TRANSACTIONS_KEY constant?

    - by Carles Estevadeordal
    I've been looking at implementing the new VerificationController to verify in-app purchases: http://developer.apple.com/library/ios/#releasenotes/StoreKit/IAP_ReceiptValidation/_index.html I wonder if there is an example anywhere of how to validate a transaction, since it seems that - (BOOL)verifyPurchase:(SKPaymentTransaction *)transaction; is not enough on its own and has to be implemented internally to verify the purchase when the data from the server is received. Another question is whether anyone has a clue what the KNOWN_TRANSACTIONS_KEY constant is and how to fill it: is it just the product id of the purchase?

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA-style ‘application’, this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view-swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor views, which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server-side Razor page much easier. We were also able to leverage most of our server-side localization without a lot of changes as a bonus. But as a result of this choice, the initial HTML document that loads is rather large – even without any data loaded into it – resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to hide the element it cloaks, so that you don’t see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into them:

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper around most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result is unfortunately a brief flicker of un-rendered markup [screenshot in the original post: raw {{ }} expressions briefly visible]. It’s brief, but plenty ugly – right? And depending on the speed of the machine this flash gets more noticeable, with slow machines that take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, and it does this by essentially applying a style like this:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak],
        .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself.
    If you look at the angular.js source file you’ll find this at the very end of the file:

        !angular.$$csp() && angular.element(document)
          .find('head')
          .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
                   '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
                   '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
                   '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
                   '</style>');

    This is meant to initially hide any elements that contain the ng-cloak attribute, or one of the other Angular directive permutations of the markup. Unfortunately, on this particular web page ng-cloak had no effect – I still see the flicker.

    Why doesn’t ng-cloak work?

    The problem is, of course, timing. Angular needs to get control of the page before it starts doing anything, even processing the ng-cloak attribute (or style, etc.). Because this page is rather large (about 35k of non-data HTML), it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page, after the HTML DOM content, there’s a slight delay, and that delay causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

    Workarounds

    There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

    Load Angular Sooner

    One obvious thing that would help with this is to load Angular at the top of the page, BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not good practice to load scripts in the header for page-load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to Angular so that Angular can use jQuery rather than its own jqLite subset. This is not something I normally would like to do, and also something that I’d likely forget in the future and end up right back here :-).

    Use ng-include for Child Content

    Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by removing a lot of the template content from the main page, so Angular gets control a lot sooner and can hide the template content. In the application in question, I realize in hindsight that it might have been smarter to break this page out with client-side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a single and rather large HTML document; Razor provides the logical separation but still results in a large physical result document. Razor did, however, end up being helpful for a few security-related blocks handled via server-side template logic that simply excludes certain parts of the UI the user is not allowed to see – something you can’t really do with client-side exclusion like ng-hide/ng-show, where the content is always sent to the client; on the server side you can simply not send it at all. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed.
    Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn’t be beneficial :-). ng-include is a valid option if you start from scratch and partition your logic. Of course, if you don’t have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn’t have this option due to the information having to be on screen all at once.

    Avoid using {{ }} Expressions

    The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying the empty {{ }} markup expression tags that get embedded into content. This gives you the dreaded “now you see it, now you don’t” effect, where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you remove {{ }} expressions from the page, you remove most of the perceived double-draw effect, as you effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }} expressions and replace them with ng-bind directives on DOM elements. For example you can turn:

        <div class="list-item-name listViewOrderNo">
            <a href='#'>{{lineItem.MpsOrderNo}}</a>
        </div>

    into:

        <div class="list-item-name listViewOrderNo">
            <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
        </div>

    to get identical results, but because the {{ }} expression has been removed there’s no double-draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type, IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It’s an option, though, especially if you think of this issue up front and you don’t have a ton of expressions to deal with.

    Add the ng-cloak Styles manually

    You can also explicitly define the .css styles that Angular injects via code manually in your application’s style sheet. By doing so, the styles become immediately available and are applied right when the page loads – no flicker. I use the minimal:

        [ng-cloak] {
            display: none !important;
        }

    which works for:

        <div id="mainContainer" class="mainContainer dialog boxshadow"
             ng-app="app" ng-cloak>

    If you use one of the other attribute/class combinations, add the other CSS selectors as well, or use the full style shown earlier. Angular will still load its version of the ng-cloak styling and override these settings later, but the local rule does the trick of hiding the content before that CSS is injected into the page. Adding the CSS to your own style sheet works well, and is IMHO by far the best option.

    The nuclear option: Hiding the Content manually

    Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight into how you can manually hide/show content on load for other frameworks, or in your own markup-based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere, I finally decided to just manually hide and show the container. The idea is simple: initially hide the container, then show it once Angular has done its initial processing and removed the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control.
    To do this I used:

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" style="display:none">

    Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular, this ‘ready’ event is the app.run() function:

        app.run(function ($rootScope, $location, cellService) {
            $("#mainContainer").show();
            …
        });

    This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data, or at least empty data – Angular has gotten control.

    Edge Case

    Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized, and the load times of the HTML and of Angular are fast enough that there’s no flicker between the rendering states. This only becomes an issue as the initial pages get rather large. Regardless, if you have an Angular application it’s probably a good idea to add the CSS style to your application’s style sheet (or a common shared one) just to make sure that cloaked content is always hidden. You never know how slow a browser somebody might be running, and while your super-fast dev machine might not show any flicker, grandma’s old XP box very well might…

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML.

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
    Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspx

    I’ve recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server’s BAM event observation library. In doing this, I noticed an issue with continuations. Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly. Careful inspection of my code confirmed that I was initializing and invoking the BAM Interceptor correctly, so I was mystified. However, I eventually found the problem: it is a logical error in the BAM Interceptor code itself.

    The BAM Interceptor provides a useful mechanism for implementing dynamic tracking. It supports configurable ‘track points’. These are grouped into named ‘locations’. BAM uses the term ‘step’ as a synonym for ‘location’. Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc. Each step defines a collection of track points.

    Understanding Steps

    The BAM Interceptor provides an abstract model for handling configuration of steps. It doesn’t, however, define any specific configuration mechanism (e.g., config files, SSO, etc.). It is up to the developer to decide how to store, manage and retrieve configuration data. At run time, this configuration is used to register track points which then drive the BAM Interceptor.

    The full semantics of a step are not immediately clear from Microsoft’s documentation. Steps represent points in a business activity where BAM tracking occurs; they are named locations in the code. What is less obvious is that they always represent either the full tracking work for a given activity, or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity. The BAM Interceptor enforces this by throwing an error if no ‘start new’ or ‘continue’ track point is registered for a named location.

    This constraint implies that each step must be marked with an ‘end activity’ track point. One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as ‘ended’ in order to ensure the right housekeeping is done in the database. If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state. A step, therefore, always represents an entire unit of work for a given activity or continuation ID. For activities with continuation, each unit of work is termed a ‘fragment’.

    Instance and Fragment State

    Internally, the BAM Interceptor maintains state data at two levels. First, it represents the overall state of the activity using a ‘trace instance’ token. This token contains the name and ID of the activity together with a couple of state flags. The second level of state represents a ‘trace fragment’. As we have seen, a fragment of an activity corresponds directly to the notion of a ‘step’. It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

    When handling continuations, the BAM Interceptor differentiates between ‘root’ fragments and other fragments. Very simply, a root fragment represents the start of an activity; other fragments represent continuations. This is where the logic breaks down.
    The BAM Interceptor loses state integrity for root fragments when continuations are defined.

    Initialization

    Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data. The process starts by populating an Activity Interceptor Configuration object with an array of track points. These can belong to different steps (aka ‘locations’) and can be registered in any order. Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor. The BAM Interceptor sets up a hash table of array lists: each step is represented by an array list, and each array list contains an ordered set of track points. The BAM Interceptor represents track points as ‘executable’ components. When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn. Each track point retrieves any required data using a call-back mechanism, and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.). The serialised trace fragment is then handed off to a BAM event stream (buffered or direct), which takes the appropriate action.

    The Root of the Problem

    The logic breaks down in the Activity Interceptor Configuration. Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token. This provides the basic metadata for the activity as a whole. It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed. This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration.

    Each trace instance token is automatically initialised to represent a root fragment. However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step. If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.

    This makes no sense. The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration, and the Activity Interceptor Configuration is designed to hold track points for multiple steps. The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment, and then subsequently set to false at the point that a continuation step is processed. Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed. This is clearly an error in logic.

    The problem causes havoc when the BAM Interceptor is used with continuation. Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes! This has nothing to do with the root and the continuation being in the same process. It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed.

    The Workaround

    Fortunately, it is easy to work around the bug. The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.
    This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group. In my case, the first approach was required. Here is what the amended code looks like:

        // Because of a logic error in Microsoft's code, a separate
        // ActivityInterceptorConfiguration must be used for each location.
        // The following code extracts only those track points for a given
        // step name (location).
        var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                              where (string)tp.Location == bamStepName
                              select tp;

        var bamActivityInterceptorConfig =
            new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

        foreach (var trackPoint in trackPointGroup)
        {
            switch (trackPoint.Type)
            {
                case TrackPointType.Start:
                    bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
                    break;
                etc…

    I’m using LINQ to filter a list of track points for those entries that correspond to a given step, and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class. As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and the AWS Tools for Windows PowerShell, but they don't seem to include a zone-file import option the way the Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the "BIND to Amazon Route 53 Conversion Tool" Perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format, and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported, I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver, prior to updating each domain's nameserver records. I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post, but ServerFault won't allow me to post more than 2 links as a new member; for the same reason, I also can't comment on Vasili's example in the other linked thread.)
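
    For what it's worth, the Python cli53 is published on PyPI, so on Windows the setup can be as simple as installing Python and pip first. A hedged sketch (the flag names are from memory, so check cli53 --help; the zone name and file are placeholders):

        rem Assumes Python 2.7 and pip are installed and on PATH
        pip install cli53

        rem boto (which cli53 uses) reads credentials from the environment
        set AWS_ACCESS_KEY_ID=AKIA...
        set AWS_SECRET_ACCESS_KEY=...

        rem Import a BIND zone file into a hosted zone
        cli53 import example.com --file example.com.zone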

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a Java app and ES. Both are on the same server and communicate with each other. The problem I am having is that my firewall configuration prevents Java from connecting to ES, and I'm not sure why, really. I have tried lots of things, like opening the port range 9200:9400 to the server IP, without any luck, but from what I know, all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this Java app; ES uses the port range 9200:9400. This is my iptables script:

        echo -e Deleting rules for INPUT chain
        iptables -F INPUT
        echo -e Deleting rules for OUTPUT chain
        iptables -F OUTPUT
        echo -e Deleting rules for FORWARD chain
        iptables -F FORWARD

        echo -e Setting by default the drop policy on each chain
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD DROP

        echo -e Open all ports from/to localhost
        iptables -A INPUT -i lo -j ACCEPT

        echo -e Open SSH port 22 with brute force security
        iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

        echo -e Open NGINX port 80
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        echo -e Open NGINX SSL port 443
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        echo -e Enable DNS
        iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    And I get this in the Java app when this config is in place:

        org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
            at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
            at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
            at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)

    Do any of you see any problem with this configuration and ES? Thanks in advance.
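
    A hedged guess at the cause, for anyone comparing notes: the "-i lo" rule only covers traffic to 127.0.0.1, so if the Java client or ES's node discovery talks to the machine's routable address (a common default for the ES transport port, 9300), those packets fall through to the INPUT chain's DROP policy. Two sketches, with <server-ip> as a placeholder:

        # Option 1: open the ES ports only to traffic from the server itself,
        # so they stay closed to the outside world
        iptables -A INPUT -p tcp -s <server-ip> --dport 9200:9400 -j ACCEPT

        # Option 2 (elasticsearch.yml, not iptables): bind ES to loopback so
        # the existing "-i lo" rule applies
        # network.host: 127.0.0.1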

    Read the article

  • nginx reverse proxy hide redirects

    - by NZCoderGuy
    I have Nginx as a reverse proxy with two sites, A and B, running behind it. Users go from the public side through the reverse proxy to site A, and then on to site B (from A to B by clicking links). What would be a typical configuration for a scenario like this? The URL in the browser should always be the reverse proxy. This is what I have so far, but it is not working:

        worker_processes 2;
        error_log logs/error.log info;

        events {
            worker_connections 1024;
        }

        http {
            server {
                resolver 127.0.0.1;
                listen 8080;

                location / {
                    set $target 'siteA';
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    rewrite ^(.*) $1 break;
                    proxy_pass http://$target;
                }
            }
        }
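
    One hedged pointer for this scenario: the nginx directive that hides backend redirects is proxy_redirect, which rewrites the Location headers the upstreams send back so that they point at the proxy. A sketch, assuming siteA and siteB resolve as upstream hostnames:

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://siteA;

            # Map redirect targets issued by the backends onto the proxy's URL space
            proxy_redirect http://siteA/ /;
            proxy_redirect http://siteB/ /siteB/;
        }

        # A matching /siteB/ location would then proxy_pass to siteB itself.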

    Read the article

  • What can I do to prevent BIND from outputting these logs

    - by lacrosse1991
    I recently noticed that BIND has been producing a large number of log entries in /var/syslog relating to one particular server (ezdns). What can I do to prevent these logs from appearing? Why would this server be the only one causing BIND to produce these logs? I've searched around Google and have found a few different solutions for hiding these logs, but I would like to know why this one server is so troublesome.
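
    For the "hiding" half, the usual mechanism is BIND's logging statement, which can route a whole category of messages to a null channel. A hedged sketch for named.conf (the category here is a guess; match it against the prefix of the actual offending log lines, e.g. lame-servers or resolver):

        logging {
            // discard one noisy category instead of suppressing everything
            category lame-servers { null; };
        };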

    Read the article

  • How do I deny all requests not from cloudflare?

    - by phillips1012
    I've recently gotten denial-of-service attacks from multiple proxy IPs, so I installed CloudFlare to prevent this. Then I started noticing that the attackers were bypassing CloudFlare by connecting directly to the server's IP address and forging the Host header. What is the most performant way to return 403 on connections that aren't from the 18 IP addresses used by CloudFlare? I tried denying all and then explicitly allowing the CloudFlare IPs, but this doesn't work, since I've set things up so that CF-Connecting-IP provides the IP that the allow rules test against. I'm using nginx 1.6.0.
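
    A hedged alternative that sidesteps the nginx allow/deny ordering problem entirely: filter at the packet level, so non-CloudFlare clients never reach nginx and CF-Connecting-IP stays free for logging. A sketch (the two ranges below are placeholders; substitute the full published list from cloudflare.com/ips):

        # Accept web traffic only from CloudFlare's edge ranges
        for range in 103.21.244.0/22 103.22.200.0/22; do    # placeholders
            iptables -A INPUT -p tcp -m multiport --dports 80,443 -s "$range" -j ACCEPT
        done
        # Everything else never reaches the web server
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP

    Dropping in iptables is also about as cheap per connection as it gets, which speaks to the "most performant" requirement.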

    Read the article

  • Apache Consuming Resources

    - by Chris Edwards
    Our web server has suddenly been giving us load issues. After I restart Apache the load stays low for a few hours up to a day or so, then it's back up to around 3.0 until I restart Apache again. Any suggestions on tracking down what is causing this? Thanks! Chris Edwards

        top - 20:15:05 up 19 days, 10:59,  1 user,  load average: 2.11, 2.17, 2.47
        Tasks: 532 total,   6 running, 525 sleeping,   0 stopped,   1 zombie
        Cpu(s): 11.5%us,  0.4%sy,  0.0%ni, 88.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  32842656k total, 13185872k used, 19656784k free,  6143740k buffers
        Swap:  1048568k total,        0k used,  1048568k free,  3515252k cached

          PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        19089 apache  20  0 1912m 1.5g 6584 R 99.6  4.9 71:01.53 /usr/sbin/httpd
        21136 apache  20  0  392m  55m 5736 R 95.0  0.2  0:03.45 /usr/sbin/httpd
        21139 apache  20  0  374m  38m 5808 S 40.5  0.1  0:04.91 /usr/sbin/httpd
        21124 apache  20  0  389m  51m 5948 R 38.9  0.2  0:03.15 /usr/sbin/httpd
        21111 apache  20  0  371m  35m 5964 S 18.8  0.1  0:01.22 /usr/sbin/httpd
        21127 apache  20  0  375m  39m 5832 S 17.8  0.1  0:01.66 /usr/sbin/httpd
        21128 apache  20  0  374m  38m 5792 S 16.2  0.1  0:01.56 /usr/sbin/httpd
        21110 apache  20  0  374m  38m 5848 S 15.9  0.1  0:01.02 /usr/sbin/httpd
        21113 apache  20  0  374m  38m 5836 S 15.9  0.1  0:02.16 /usr/sbin/httpd
        21077 apache  20  0  379m  43m 6408 S 11.0  0.1  0:07.22 /usr/sbin/httpd
        21101 apache  20  0  384m  49m 6988 R  5.8  0.2  0:04.47 /usr/sbin/httpd
        21112 apache  20  0  374m  38m 5956 R  2.6  0.1  0:01.61 /usr/sbin/httpd
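
    The top output points at one child (PID 19089) pinned near 100% CPU with 71 minutes of accumulated time. One hedged way to see what such a child is actually serving is Apache's scoreboard via mod_status; a sketch, assuming the module is loaded (Apache 2.2 syntax to match a 2014-era box):

        # httpd.conf
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

        # then, on the server, while the load is high:
        # curl http://localhost/server-status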

    Read the article

  • Server 2003 Terminal Services Printers not redirecting, no sessions created.

    - by mikerdz
    OK, odd scenario on a Windows Server 2003 Standard box running as a terminal server. On Friday we installed 2 new Windows 7 machines to replace older XP machines. After adding these machines and their local printers, none of the other 16 Windows 7 machines can redirect printing to the server. I have checked Group Policy on the domain controller; nothing is being blocked. In Terminal Services Manager, the client settings are set to Use Client Settings. On the RDP client, port redirection is enabled. I have tried disabling the Use Client Settings option and manually selecting the options for print redirection and default printer connection, but it still does not work. After some research, I found this MS article: http://support.microsoft.com/kb/2492632 I went ahead and added the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd\fEnablePrintRDR DWORD that the article references and set it to "1" to enable the option. I restarted the server, but it still would not print. I am getting quite desperate with this issue, because nothing seems to have changed when installing the two new clients and printers. I uninstalled the print drivers for the printers from the server. I have even gone as far as connecting each of the printers manually via UNC (\\computername\printer), but even though that works, it prints awfully slowly. Please help!!!!
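
    For reference, the registry value from KB2492632 can also be added from an elevated command prompt; a sketch using the same key and value name the article cites:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd" ^
            /v fEnablePrintRDR /t REG_DWORD /d 1

        rem Restart the print spooler before re-testing the redirection
        net stop spooler && net start spooler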

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so it proxy_passes requests to my node apps. This question on StackOverflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it belongs on ServerFault). Here is the nginx configuration:

        server {
            listen 80;
            listen [::]:80;

            root /var/www/services.stefanow.net/public_html;
            index index.html index.htm;

            server_name services.stefanow.net;

            location / {
                try_files $uri $uri/ =404;
            }

            location /test-express {
                proxy_pass http://127.0.0.1:3002;
            }

            location /test-http {
                proxy_pass http://127.0.0.1:3003;
            }
        }

    Using plain node:

        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('Hello World\n');
        }).listen(3003, '127.0.0.1');
        console.log('Server running at http://127.0.0.1:3003/');

    It works! Check: http://services.stefanow.net/test-http

    Using express:

        var express = require('express');
        var app = express();

        // app.get('/', function(req, res) { res.redirect('/index.html'); });

        app.get('/index.html', function(req, res) {
            res.send("blah blah index.html");
        });

        app.listen(3002, "127.0.0.1");
        console.log('Server running at http://127.0.0.1:3002/');

    It doesn't work :( See: http://services.stefanow.net/test-express

    I know that something is going on:

    a) test-express is NOT running via nginx
    b) test-express IS running (and I can confirm it is running via the command line while SSHed into the server):

        root@stefanow:~# service nginx restart
         * Restarting nginx nginx        [ OK ]
        root@stefanow:~# curl localhost:3002
        Moved Temporarily. Redirecting to /index.html
        root@stefanow:~# curl localhost:3002/index.html
        blah blah index.html

    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
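
    A hedged diagnosis based on the configs above: without a trailing slash, proxy_pass forwards the original URI unchanged, so the Express app receives /test-express and /test-express/index.html, for which it has no routes. The plain http server answers every URI, which is why /test-http works regardless. A sketch of the prefix-stripping variant:

        location /test-express/ {
            # The trailing slash on the upstream URL makes nginx replace the
            # matched /test-express/ prefix, so Express sees / and /index.html.
            proxy_pass http://127.0.0.1:3002/;
        }

    The alternative is to leave nginx as-is and mount the Express routes under /test-express.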

    Read the article

  • Theoretical Wi-Fi decay

    - by lithiium
    Is there a way to (theoretically, at least) calculate the decay in bandwidth of a Wi-Fi link relative to signal strength? For example, I know that I can theoretically expect 54 Mbps from 802.11g at 100% signal; what bandwidth should I expect at 30% signal? Is it linear? Is it the same? I could not find any source for this, but considering the error retransmission involved, I guess it should be possible to calculate something like this. Anybody know?
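
    For a rough theoretical handle (offered as background, not 802.11-specific math): the Shannon–Hartley theorem bounds what any channel can carry, and it is logarithmic in signal-to-noise ratio, not linear:

        C = B * log2(1 + S/N)

    where C is the capacity in bits/s, B the channel bandwidth in Hz, and S/N the signal-to-noise ratio. Halving the signal power therefore does not halve the capacity. In practice, 802.11g approximates that curve in discrete steps, falling back through its 54/48/36/24/18/12/9/6 Mbps rates as the SNR drops, so the observed decay is a staircase rather than a straight line.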

    Read the article

  • Correlating %RDY in esxtop to CPU Usage in Guest

    - by Joe
    We recently upgraded a number of our VMware hosts from 4.1 to 5.5 and noticed that many of the VMs saw a step-wise jump in CPU usage as shown by the guest VM. We have not yet upgraded VMware Tools on any of the guests, but after investigating a bit more we saw many of these guests with a high %RDY value (50%) when viewed under esxtop. Unfortunately Linux (the guest) just shows "high CPU usage" without any insight into what portion of that is coming from %RDY (VMware saying, "your guest is waiting on CPU from the host"). Are there any tools, /proc entries, etc. that can shed light on that information?

    Read the article

  • route port 3000 to apache2 alias

    - by user223470
    I have a Meteor application running on port 3000. I can successfully connect to it at www.myurl.com:3000, but would rather connect to it via www.myurl.com/myapp. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            <Location />
                ProxyPass http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    I do not know how to continue from here to get the app on www.myurl.com/myapp. In other situations I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction to go in this case. How do I configure Apache to send port 3000 to www.myurl.com/myapp?
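
    A hedged sketch of the sub-path variant (untested; note that Meteor also needs to know its public URL, normally via the ROOT_URL environment variable, or its generated asset links will still point at the site root):

        <VirtualHost *:80>
            ServerName myurl.com

            # Serve the Meteor app under /myapp instead of the site root
            <Location /myapp>
                ProxyPass http://localhost:3000
                ProxyPassReverse http://localhost:3000
            </Location>
        </VirtualHost>

        # and when starting the app, e.g.:
        # ROOT_URL=http://myurl.com/myapp meteor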

    Read the article

  • Can't connect to SQL Server Management Express 2012

    - by Rare-Man
    I installed SQL Server Management Studio Express 2012, but when I try to connect in the Management Studio environment, I get this error:

        TITLE: Connect to Server
        Cannot connect to ..

        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing
        a connection to SQL Server. The server was not found or was not
        accessible. Verify that the instance name is correct and that SQL Server
        is configured to allow remote connections. (provider: Named Pipes
        Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 2)

        For help, click:
        http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476

        The system cannot find the file specified

        BUTTONS: OK

    Also, during installation I didn't get an option to select a cluster. This is my SQL Server Configuration Manager; my list of SQL Server services is empty... And when I try "Remove a Failover Cluster Node", this error happens: http://oi57.tinypic.com/2lrvat.jpg

    Read the article

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first part of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs there's a mysterious FORTRAN STOP, and later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time, and it consistently fails at the exact same spot. (I can move the failure point by adding debug output, but the debug is in the main process.) How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info, I haven't been able to use it yet. There is no trace output around the FORTRAN STOP. The whole thing happens so fast that I can't seem to observe it using ps. Is there a way I can monitor all the processes being initiated from the time the work queue starts? Or some other way I can figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
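
    A hedged sketch of one way to capture every short-lived child: run the job script under strace with per-process output files, so whatever prints the FORTRAN STOP leaves its own trace (standard GNU strace flags; the script name is a placeholder):

        # -f follows forks; with -ff and -o, each PID gets its own file trace.PID
        # -e trace=process restricts output to fork/exec/exit-family syscalls
        strace -f -ff -e trace=process -o /tmp/trace ./run_entry_model.sh

        # afterwards, look for the PID that exited around the FORTRAN STOP:
        grep -l exit_group /tmp/trace.*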

    Read the article

  • All network interfaces hang for seconds while one interface goes up/down

    - by user3698377
    I am building a client/server application that uses several network interfaces in parallel for redundancy, and I have noticed that while one network interface goes down or comes up, communication on the other interfaces hangs for several seconds. I could reproduce this behavior without my application in a simple way:

        1. There are 2 interfaces available on computer 1 (Ethernet and WiFi).
        2. From computer 2, ping the IP address of the Ethernet connection of computer 1.
        3. Disconnect the WiFi of computer 1.
        4. The ping hangs for seconds, and then the packets are travelling again between the 2 computers.

    The hanging happens as well if I turn the WiFi connection on computer 1 back on. It also happens if I ping the WiFi IP and turn the Ethernet connection off and on (or unplug/plug the cable). I am using Linux Ubuntu 12.04 on both computers. Any ideas why this is happening, and if or how it can be avoided?

    Read the article

  • iptables: allowing incoming for 192.168.1.0/24 allowed incoming for all?

    - by nortally
    The internal side of my ISP router has three devices:

        ISP router       128.128.43.1
        Firewall router  128.128.43.2
        Server           128.128.43.3

    Behind the Firewall router is a NAT network using 192.168.100.n/24. This question is regarding iptables running on the Server. I wanted to allow access to port 8080 only from the NAT clients behind the Firewall router, so I used this rule:

        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

    This worked, but UNEXPECTEDLY ALLOWED GLOBAL ACCESS, which resulted in our JBOSS server getting compromised. I now know that the correct rule is to use the Firewall router's address instead of the internal network, but can anyone explain why the first rule allowed global access? I would have expected it to just fail. Full config, mostly lifted from a RedHat server:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :Firewall-1-INPUT - [0:0]
        -A INPUT -j Firewall-1-INPUT
        -A FORWARD -j Firewall-1-INPUT
        -A Firewall-1-INPUT -i lo -j ACCEPT
        -A Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow ssh from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow https from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow JBOSS from Firewall"
        ### THIS RESULTED IN GLOBAL ACCESS TO PORT 8080 ###
        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        ### THIS WORKED
        -A Firewall-1-INPUT -s 128.128.43.2 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        ###
        -A Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
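
    One hedged way to confirm which rule actually matched the unwanted traffic is the per-rule packet counters; a sketch:

        # -v shows packet/byte counters per rule, -n skips DNS lookups,
        # --line-numbers makes it easy to refer back to a specific rule
        iptables -L Firewall-1-INPUT -n -v --line-numbers

        # Watch the counters while probing port 8080 from an outside host;
        # the rule whose counters climb is the one granting access.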

    Read the article

  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard-sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI, backed by iptables:

        ~]$ ps aux | grep firewall
        root      3222  0.0  1.2  22364 12336 ?      Ss   18:17   0:00 /usr/bin/python /usr/sbin/firewalld --nofork
        david     3783  0.0  0.0   4788   808 pts/0  S+   20:08   0:00 grep --color=auto firewall
        ~]$

    OK, so how to get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted, so it was time to look for some clues. There were several one-liners, but they did not work for me; they kept spitting out the help text. For example:

        ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp
        [sudo] password for auser:
        option --add not a unique prefix

    Also, posts that claimed this command worked also stated that it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command I get no result:

        ~]$ sudo firewall-cmd --zone=internal --list-services
        ipp-client mdns dhcpv6-client ssh samba-client
        ~]$ sudo firewall-cmd --zone=internal --list-ports
        ~]$
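
    For reference, current firewall-cmd syntax takes the port as part of a single option, and --permanent writes it to the configuration that survives reboots; a sketch:

        # open Synergy's port in the internal zone, persistently
        firewall-cmd --permanent --zone=internal --add-port=24800/tcp
        # load the permanent configuration into the running firewall
        firewall-cmd --reload
        # verify
        firewall-cmd --zone=internal --list-ports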

    Read the article

  • stat command filesize reporting on busybox

    - by datadevil
    I'm trying to write a shell script in busybox to check the file size of a file. Having read that stat is more reliable than ls, I decided to use that, but somehow the following command:

        stat -c %s filename

    gives me the output 559795, and it does so for both of the following files (shown using ls -la):

        0 Jan 20 16:32 foo_empty
        4 Jan 20 16:32 foo_not_empty

    Anyone know what's happening there? I can just go back to using ls, but I'm not understanding what's happening here, and that's bothering me.
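
    A hedged first check before blaming busybox: make sure the stat that runs is actually the busybox applet, and that the argument expands to the file you think it does. A sketch:

        # which stat is the shell actually invoking?
        type stat

        # call the busybox applet explicitly on each file
        busybox stat -c '%s' foo_empty
        busybox stat -c '%s' foo_not_empty

    If the explicit calls report 0 and 4, the earlier 559795 was presumably the size of some other file matched by "filename", rather than a busybox bug.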

    Read the article

  • How to add PTR record for a /16 IP block in BIND using $GENERATE directive?

    - by yegle
    I'm trying to reverse-map a block of IPs with PTR records to a special name, so their usage can be easily checked with a simple nslookup. For example, here's an nslookup result:

        # nslookup 172.17.201.101
        Server:         10.253.33.1
        Address:        10.253.33.1#53

        101.201.17.172.in-addr.arpa     name = for.internal.use.only.

    And I learned that I can add PTR records for a /24 block by using the $GENERATE directive:

        $GENERATE 0-254 $.201.17.172 PTR for.internal.use.only.

    So here are the questions: Am I doing the right thing by exposing information about IP addresses through PTR records? Any better idea? If the answer to the question above is YES, then how do I add PTR records for a /16 IP range? I know I can write 255 lines of $GENERATE directives, but is there a better solution?
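
    On the /16 question: $GENERATE iterates a single variable, so covering a /16 with distinct owner names still takes one $GENERATE line per /24. If, however, every address should map to the same name, as in the example above, a wildcard PTR record avoids the problem entirely; a hedged sketch for the zone file of 17.172.in-addr.arpa (zone name inferred from the example):

        ; one wildcard answers for every address in 172.17.0.0/16
        *    IN    PTR    for.internal.use.only.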

    Read the article

  • Scheduled task does not run on Windows 2003 server on VMware unattended, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below explains the problem. We really need to run a more complex bat file, but this shows the issue. I have a bat file that copies a file from server A to server B. I use full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain id with password that is part of the administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for a finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server: no issues at all. I checked the obvious things: the task has "run only if logged on" unchecked, the domain id has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?

    Read the article

  • Subversion: Can't move... Permission Denied

    - by yalestar
    Whilst trying to commit some files to SVN, we're all suddenly getting this error:

        Can't move '/usr/local/svn/articles/db/txn-protorevs/2002-8.rev' to
        '/usr/local/svn/articles/db/revs/2/2003': Permission denied

    I checked the permissions in the repository, and they look the same as all our other repositories, yet this is the only repo that causes the error. Any ideas how I can fix this? SVN is running as root on Linux via svnserve, FWIW.
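
    A hedged checklist for this symptom: since svnserve runs as root, a plain file-mode problem is unlikely, so it's worth ruling out a filesystem that has remounted itself read-only (a classic way for even root to get "Permission denied") before comparing the db/ tree against a healthy repository. A sketch:

        # has the filesystem holding the repositories gone read-only?
        mount | grep -w ro
        dmesg | tail

        # compare ownership and modes with a working repository
        ls -ld /usr/local/svn/articles/db/revs/2 \
               /usr/local/svn/articles/db/txn-protorevs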

    Read the article

  • Open source alternative for Canonical Landscape? [on hold]

    - by netvope
    From Canonical: "Landscape is an easy-to-use systems management and monitoring service that enables you to manage multiple Ubuntu machines as easily as one through a simple Web-based interface." However, Landscape is not free. The RedHat counterpart, Satellite, has a free upstream version called Spacewalk, but it doesn't work on Ubuntu. (There is an attempt to port Spacewalk to Debian, but it doesn't look stable yet.) Are there any open-source alternatives to Landscape? Better yet, is there any Spacewalk-like software that works on both RedHat-based and Debian-based systems?

    Read the article
