Daily Archives

Articles indexed Thursday, November 7, 2013

Page 14/19 | < Previous Page | 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • AJAX Submit to PHP still loading page after preventDefault()

    - by dannyburrows
    I have a webpage that uses .ajax to send variables to a PHP script. The PHP page loads into the browser as if I had navigated to it. The script receives the data correctly, so my issue is stopping the navigation and keeping the user on the original page. The code for my form is here:

        echo "<form method='post' action='addTask.php' id='myform'>\n";
        echo "<input name='addtask' id='addtask' maxlength='64'/><br/>\n";
        echo "<input type='submit' name='submit' id='submit' value='Add Task'/>\n";
        echo "</form>\n";

    The code for my jQuery is here:

        $(function(){
            $('#myform').submit(function(e){
                e.stopPropagation();
                $.ajax({
                    url: 'addTask.php',
                    type: 'POST',
                    data: {},
                    success: alert("Success")
                });
            });
        });

    I have tried e.preventDefault(), e.stopPropagation(), and return false. Any help is appreciated. I have also tried:

        $("#submit").click(function(e){
            e.preventDefault();
            $.ajax({
                type: "POST",
                url: "addtask.php",
                data: {}
            })
            .done(function() {
                alert("success");
            })
            .fail(function() {
                alert("error");
            });
        });

    and:

        $(function(){
            $('#myform').submit(function(e){
                e.preventDefault();
                $.ajax({
                    url: 'addTask.php',
                    type: 'POST',
                    data: {},
                    success: function(){ alert("Success"); }
                });
            });
        });
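
    For comparison, here is a minimal working handler (a sketch that assumes the form above; it serializes the form fields rather than sending an empty data object). Note that in the first snippet, success: alert("Success") runs the alert immediately while the options object is being built instead of passing a callback, and that handler never calls e.preventDefault(), which is why the browser still navigates:

        $(function () {
            $('#myform').submit(function (e) {
                e.preventDefault();                          // stop the normal form navigation
                $.post('addTask.php', $(this).serialize())   // send the form fields to the script
                    .done(function () { alert('Success'); })
                    .fail(function () { alert('Error'); });
            });
        });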

    Read the article

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables.

        library(data.table)
        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10)

        > DT
             plate   id val
         1: plate1 CTRL   1
         2: plate1 CTRL   2
         3: plate1  ID1   3
         4: plate1  ID2   4
         5: plate1  ID3   5
         6: plate2 CTRL   6
         7: plate2 CTRL   7
         8: plate2  ID1   8
         9: plate2  ID2   9
        10: plate2  ID3  10

    What I would like to do is take the average of DT[, val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get:

        DT[id == "CTRL", ctrl := mean(val), by = plate]

        > DT
             plate   id val ctrl
         1: plate1 CTRL   1  1.5
         2: plate1 CTRL   2  1.5
         3: plate1  ID1   3   NA
         4: plate1  ID2   4   NA
         5: plate1  ID3   5   NA
         6: plate2 CTRL   6  6.5
         7: plate2 CTRL   7  6.5
         8: plate2  ID1   8   NA
         9: plate2  ID2   9   NA
        10: plate2  ID3  10   NA

    What I really need is:

        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10,
                         ctrl = rep(c(1.5, 6.5), each = 5))

        > DT
             plate   id val ctrl
         1: plate1 CTRL   1  1.5
         2: plate1 CTRL   2  1.5
         3: plate1  ID1   3  1.5
         4: plate1  ID2   4  1.5
         5: plate1  ID3   5  1.5
         6: plate2 CTRL   6  6.5
         7: plate2 CTRL   7  6.5
         8: plate2  ID1   8  6.5
         9: plate2  ID2   9  6.5
        10: plate2  ID3  10  6.5

    Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, and then map those values back to the appropriate rows using data.tables.
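
    For reference, one idiomatic data.table way to get exactly that result is to compute the control mean inside the grouped assignment instead of filtering first, so the single value is recycled across every row of the plate (a sketch using the DT defined above):

        library(data.table)

        # For each plate, take the mean of val over the CTRL rows only,
        # and assign that one value to every row in the plate.
        DT[, ctrl := mean(val[id == "CTRL"]), by = plate]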

    Read the article

  • Subscribing to a message sent from another application

    - by tonga
    I have two Java applications: AppOne and AppTwo. In AppOne, I use a JMS sender to publish a Topic. In both AppOne and AppTwo, I use a JMS MessageListener to subscribe to the message published by AppOne. I use ActiveMQ as my JMS broker, along with Spring JMS. However, I can only see the echoed message received by the AppOne message listener; I never see the echoed message received by the AppTwo listener. The AppOne message listener is in the same application/project as the message publisher, but the AppTwo message listener is in a different application/project. The AppOne listener class is:

        public class CustomerStatusListener implements MessageListener {
            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    try {
                        System.out.println("Subscriber 1 got you! The message is: "
                                + ((TextMessage) message).getText());
                    } catch (JMSException ex) {
                        throw new RuntimeException(ex);
                    }
                } else {
                    throw new IllegalArgumentException("Message must be of type TextMessage");
                }
            }
        }

    It is invoked by a test class JmsTest in AppOne:

        public class JmsTest {
            public static void main(String[] args) {
                ClassPathXmlApplicationContext context =
                        new ClassPathXmlApplicationContext("message-bean.xml");
                CustomerStatusSender messageSender =
                        (CustomerStatusSender) context.getBean("customerMessageSender");
                messageSender.simpleSend();
                context.close();
            }
        }

    The AppTwo listener class is:

        public class CustomerStatusMessageListener implements MessageListener {
            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    try {
                        System.out.println("Subscriber 2 got you! The message is: "
                                + ((TextMessage) message).getText());
                    } catch (JMSException ex) {
                        throw new RuntimeException(ex);
                    }
                } else {
                    throw new IllegalArgumentException("Message must be of type TextMessage");
                }
            }
        }

    The bean definition file for AppTwo, where Subscriber 2 lives, is:

        <bean id="connectionFactoryBean" class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="tcp://localhost:61616" />
        </bean>

        <!-- this is the Message Driven POJO (MDP) -->
        <bean id="customerStatusListener"
              class="com.mydomain.jms.CustomerStatusMessageListener" />

        <!-- and this is the message listener container -->
        <bean id="listenerContainer"
              class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="connectionFactoryBean" />
            <property name="destination" ref="topicBean" />
            <property name="messageListener" ref="customerStatusListener" />
        </bean>

    The bean id topicBean is the bean associated with the publisher. If both listeners received the message sent from AppOne, I would see two echoed messages:

        Subscriber 1 got you! The message is: hello world
        Subscriber 2 got you! The message is: hello world

    But right now I only see the first line, which means only the listener in AppOne got the message. So how do I let the listener in AppTwo get the message? The first listener is in the same application as the sender, so it is easy to understand how it gets the message. But what about the second listener, which is in a different application? What is the correct way to subscribe to a JMS Topic published from another application?
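
    Two things worth checking (assumptions, since the full configuration isn't shown): the DefaultMessageListenerContainer should be told it is consuming from a topic (e.g. <property name="pubSubDomain" value="true" /> when the destination is given by name), and a non-durable topic subscriber only receives messages published while it is connected, so AppTwo's Spring context must be up before AppOne sends. A minimal AppTwo bootstrap might look like this ("apptwo-beans.xml" is a hypothetical name for the bean file above):

        public class AppTwoMain {
            public static void main(String[] args) throws Exception {
                // Loading the context starts the listener container; it must be
                // running before AppOne publishes, or the message is simply missed.
                ClassPathXmlApplicationContext context =
                        new ClassPathXmlApplicationContext("apptwo-beans.xml");
                System.out.println("AppTwo subscriber started, waiting for messages...");
                Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so onMessage can fire
            }
        }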

    Read the article

  • How to change the content of a tab while you are on the same tab using AngularJS and Bootstrap?

    - by user2958608
    Using AngularJS and Bootstrap, let's say there are three tabs: tab1, tab2, and tab3. There are also some links on each tab. Now suppose tab1 is active. The question is: how do I change the content of tab1 by clicking a link within that same tab?

    main.html:

        <div class="tabbable">
          <ul class="nav nav-tabs">
            <li ng-class="{active: activeTab == 'tab1'}"><a ng-click="activeTab = 'tab1'" href="">tab1</a></li>
            <li ng-class="{active: activeTab == 'tab2'}"><a ng-click="activeTab = 'tab2'" href="">tab2</a></li>
            <li ng-class="{active: activeTab == 'tab3'}"><a ng-click="activeTab = 'tab3'" href="">tab3</a></li>
          </ul>
        </div>
        <div class="tab-content">
          <div ng-include="'/' + activeTab"></div>
        </div>

    tab1.html:

        <h1>TAB1</h1>
        <a href="/something">Something</a>

    something.html:

        <h1>SOMETHING</h1>

    Again, the question is how to change the tab1 content to something.html while tab1 stays active.
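
    One approach (a sketch, independent of any routing setup): drive the inner content from its own scope variable, so a click inside the tab swaps the included partial while activeTab stays 'tab1'. Here tab1-inner.html is a hypothetical partial holding tab1's original content:

        <!-- tab1.html -->
        <h1>TAB1</h1>
        <div ng-init="innerView = '/tab1-inner.html'">
          <a href="" ng-click="innerView = '/something.html'">Something</a>
          <div ng-include="innerView"></div>
        </div>

    Because ng-include creates a child scope, setting innerView from inside an included partial may require storing it on a controller object (for example vm.innerView) rather than a bare scope property.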

    Read the article

  • How to read a Pair in FreeMarker

    - by Lukasz Rys
    OK, I'm having a little trouble reading a pair. I create my pair like this:

        private Pair<Integer, Integer> count(something) {
            int c1 = 2;
            int c2 = 4;
            return new Pair<Integer, Integer>(c1, c2);
        }

    and 'send' it to the FTL via Java:

        mv.addObject("counted", count(something));

    I won't describe the whole sending mechanism because I don't think it matters for this issue. I receive it in the FTL, and I tried to 'read' it like this:

        <#list counted?keys as key>
          <a href="#offerOrderTab"><@spring.message "someMsg"/>(${key}/${counted[key]})</a>
        </#list>

    But then I get the error:

        Expecting a string, date or number here, Expression x is instead a freemarker.ext.beans.SimpleMethodModel

    As I suppose, you can't iterate over pairs (or am I wrong?). I know the pair contains only one key and one value, but I still have to send it this way, and I thought reading it would be similar to iterating through a map. In Java I would use pair.first() and pair.second(), but that doesn't work in the FTL (and I understand why it shouldn't). I also tried casting it to a String with ?string, but that didn't work either.
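
    The SimpleMethodModel in the error suggests FreeMarker has wrapped the pair's accessor methods rather than exposing hash keys, so ?keys has nothing sensible to return. Assuming the Pair class has public first() and second() methods (an assumption, since its definition isn't shown), they can be invoked explicitly in the template:

        <#-- call the accessors with (); a Pair bean is not a hash, so ?keys won't work -->
        <a href="#offerOrderTab"><@spring.message "someMsg"/>(${counted.first()}/${counted.second()})</a>

    Alternatively, copying the two values into a Map<String, Integer> before calling mv.addObject("counted", ...) would make the original <#list counted?keys as key> loop work unchanged.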

    Read the article

  • R software: How to extract values from a RasterStack with x,y coordinates?

    - by Eddie
    I have a RasterStack (5 raster layers) that is effectively a raster time series:

        r <- raster(nrow=20, ncol=200)
        s <- stack( sapply(1:5, function(i) setValues(r, rnorm(ncell(r), i, 3))) )

        > s
        class       : RasterStack
        dimensions  : 20, 200, 4000, 5  (nrow, ncol, ncell, nlayers)
        resolution  : 1.8, 9  (x, y)
        extent      : -180, 180, -90, 90  (xmin, xmax, ymin, ymax)
        coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0
        names       : layer.1, layer.2, layer.3, layer.4, layer.5
        min values  : -9.012146, -9.165947, -9.707269, -7.829763, -5.332007
        max values  : 11.32811, 11.97328, 15.99459, 15.66769, 16.72236

    My objective is to plot each pixel and explore its behavior over time. How can I extract each pixel together with its x,y coordinates and plot a time-series curve?
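
    A minimal sketch with the raster package, using the stack s from above: rasterToPoints() returns a matrix whose first two columns are the x,y coordinates and whose remaining columns hold one value per layer, so each row is one pixel's time series.

        library(raster)

        pts <- rasterToPoints(s)   # columns: x, y, layer.1 ... layer.5
        head(pts)

        # Time-series curve for one pixel (here: the first row):
        plot(1:5, pts[1, -(1:2)], type = "b",
             xlab = "layer (time step)", ylab = "value",
             main = paste("pixel at x =", pts[1, "x"], ", y =", pts[1, "y"]))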

    Read the article

  • jQuery: unidentified data returned from a POST

    - by Nhat Tuan
        <?php
        // load.php
        $myid = $_POST['id'];
        if ($myid == 1) {
            echo '1';
        } else if ($myid == 0) {
            echo '0';
        }
        ?>
        <html>
        <input id="id" />
        <div id="result"></div>
        <input type="button" id="submit" />
        <script type="text/javascript">
        $('#submit').click(function(){
            var send = 'id=' + $('#id').val();
            $.post('load.php', send, function(data){
                if (data == 1) {
                    $('#result').html('Success');
                } else if (data == 0) {
                    $('#result').html('Failure');
                } else {
                    $('#result').html('Unknow');
                }
            });
        });
        </script>
        </html>

    I tested this script on some free hosts and it works, but on my real host jQuery gets an unidentified return value and always shows 'Unknow'. When I change the test to if (data == '1') it still shows 'Unknow'. For example: input id = 1, click submit, and the returned data is 'Unknow'. Why? I think this error comes from the host, because the same script works on the free hosts. How can I fix it?
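
    One common cause of this symptom (an assumption, since the hosts differ): the PHP file is emitting extra bytes around the echo, for example a UTF-8 BOM or the whitespace and HTML that follow the closing ?> tag, so the response body is never exactly the string '1'. Trimming the response before comparing makes the check tolerant of that, and echoing the raw payload helps confirm what the host actually returns:

        $.post('load.php', send, function(data){
            var clean = $.trim(data);          // strip stray whitespace/BOM padding
            if (clean === '1') {
                $('#result').html('Success');
            } else if (clean === '0') {
                $('#result').html('Failure');
            } else {
                // show the raw payload, delimited, so invisible characters stand out
                $('#result').html('Unknow: [' + data + ']');
            }
        });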

    Read the article

  • OpenCV application crashes at runtime with error code 0xc0000142

    - by Tuan Anh
    I have OpenCV and MinGW installed with the Code::Blocks IDE, following the instructions found here: http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/ I tried the simple image-loading program in the article and the build process went fine, but when I try running the output program, it crashes with the error message "The application was unable to start correctly (0xc0000142). Click OK to close the application." I used Dependency Walker to see if the program failed to load a DLL module; here is its output screen: https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png I am not used to DW, but as far as I can see in its output, some OpenCV DLLs failed to load, and the Windows DLLs that were loaded are 64-bit instead of 32-bit (MinGW being 32-bit). I can't figure out why, as I have already added the OpenCV bin directory to the Path environment variable, yet the app still cannot load the DLL modules. I also thought Windows would automatically load the proper 32-bit DLLs when a 32-bit app runs, but the app still fails to load them. Does anyone have ideas?

    Read the article

  • Unable to Use Simple JSOUP Example To Parse Website Table Data

    - by OhNoItsAnOverflow
    I'm attempting to extract the following data from a table via Android / JSoup, but I'm having a bit of trouble nailing down the process. I think I'm getting close with the code I've provided below, but for some reason I still cannot get my TextView to display any of the table data. P.S. Live URLs can be provided if necessary.

    SOURCE:

        public class MainActivity extends Activity {
            TextView tv;
            final String URL = "http://exampleurl.com";

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                tv = (TextView) findViewById(R.id.TextView01);
                new MyTask().execute(URL);
            }

            private class MyTask extends AsyncTask<String, Void, String> {
                ProgressDialog prog;
                String title = "";

                @Override
                protected void onPreExecute() {
                    prog = new ProgressDialog(MainActivity.this);
                    prog.setMessage("Loading....");
                    prog.show();
                }

                @Override
                protected String doInBackground(String... params) {
                    try {
                        Document doc = Jsoup.connect(params[0]).get();
                        Element tableElement = doc.getElementsByClass("datagrid").first();
                        title = doc.title();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    return title;
                }

                @Override
                protected void onPostExecute(String result) {
                    super.onPostExecute(result);
                    prog.dismiss();
                    tv.setText(result);
                }
            }
        }

    TABLE:

        <table class="datagrid">
          <tbody>
            <tr>
              <th>Item No.</th>
              <th>Name</th>
              <th>Sex</th>
              <th>Location</th>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=a33660a3-aae0-45e3-9703-d59d77717836&amp;page=1&amp;&amp;lname=&amp;fname=" title="501207593">501207593&nbsp;</a></td>
              <td>USER1</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=edf524da-8598-450f-9373-da87db8d6c84&amp;page=1&amp;&amp;lname=&amp;fname=" title="501302750">501302750&nbsp;</a></td>
              <td>USER2</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=a78abeea-7651-4ac1-bba2-0dcb272c8b77&amp;page=1&amp;&amp;lname=&amp;fname=" title="531201804">531201804&nbsp;</a></td>
              <td>USER3</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
          </tbody>
        </table>
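
    Note that doInBackground only ever returns doc.title(), so even when the table is found, the TextView would show the page title rather than any row data. A sketch of pulling the datagrid rows out instead (same AsyncTask shape; the plain-text formatting is just one choice):

        @Override
        protected String doInBackground(String... params) {
            StringBuilder sb = new StringBuilder();
            try {
                Document doc = Jsoup.connect(params[0]).get();
                Element table = doc.getElementsByClass("datagrid").first();
                if (table != null) {
                    for (Element row : table.select("tr")) {
                        for (Element cell : row.select("th, td")) {
                            sb.append(cell.text()).append("  ");
                        }
                        sb.append("\n");
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return sb.toString();
        }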

    Read the article

  • Access a content control in C# when using Master Pages

    - by Guillaume Gervais
    Good day everyone. I am building a page in ASP.NET and using master pages in the process. I have a ContentPlaceHolder named "cphBody" in my master page, which contains the body of each page that uses it as its master page. In the ASP.NET web page, I have a Content tag (referencing "cphBody") that contains some controls (buttons, Infragistics controls, etc.), and I want to access these controls from the code-behind file. However, I can't do that directly (this.myControl...), since they are nested in the Content tag. I found a workaround with the FindControl method:

        ContentPlaceHolder contentPlaceHolder = (ContentPlaceHolder) Master.FindControl("cphBody");
        ControlType myControl = (ControlType) contentPlaceHolder.FindControl("ControlName");

    That works just fine. However, I suspect it's not a very good design. Do you know a more elegant way to do this? Thank you! Guillaume Gervais.
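
    If the FindControl chains become frequent, one common pattern (a sketch, not specific to this page) is a small recursive helper, so the nesting depth of master pages and placeholders stops mattering:

        // Depth-first search of a control tree for a control with the given ID;
        // returns null if nothing matches.
        private static Control FindControlRecursive(Control root, string id)
        {
            if (id.Equals(root.ID))
                return root;

            foreach (Control child in root.Controls)
            {
                Control found = FindControlRecursive(child, id);
                if (found != null)
                    return found;
            }
            return null;
        }

        // Usage from the content page's code-behind:
        // var myControl = (ControlType)FindControlRecursive(Page, "ControlName");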

    Read the article

  • What is the most efficient way to delete all selected items in a ListViewItem collection

    - by Andrew
    My user is able to select multiple items in a ListView that is configured to show details (that is, a list of rows). What I want to do is add a Delete button that deletes all of the selected items from the ListViewItem collection associated with the ListView. The collection of selected items is available in ListView.SelectedItems, but ListView.Items doesn't appear to have a single method that lets me delete an entire range. I have to iterate through the range and delete the items one by one, which would modify the very collection I'm iterating over. Any hints? Edit: What I'm basically after is the opposite of AddRange().
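
    Two common workarounds, sketched for a WinForms ListView named listView1 (an assumed name): snapshot the selection before removing, or walk the live collection backwards by index so removals don't shift the items you haven't visited yet.

        // Option 1: copy the selection, then remove from the copy.
        var selected = new ListViewItem[listView1.SelectedItems.Count];
        listView1.SelectedItems.CopyTo(selected, 0);
        foreach (ListViewItem item in selected)
            listView1.Items.Remove(item);

        // Option 2: iterate the live selection backwards by index.
        for (int i = listView1.SelectedItems.Count - 1; i >= 0; i--)
            listView1.SelectedItems[i].Remove();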

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010.

    This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing, since we are using all OOTB features. So we customised a custom list form using InfoPath and added a few rich text boxes (spell check is a requirement for this specific project). All is good in the InfoPath client, including the spell checker, so, happy days, I published straight away.

    Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog format. I went to my rich text boxes to check the spell checker and, voila, it's disabled!

    I tried hacking FormServer.aspx and CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked back to my old blog post. I also tried placing the spell-check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launches the Spell Check dialog, but it does not spell-check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with the experimentation: I logged a ticket with Microsoft Premier Support.

    On a call with the support engineer, I browsed to the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft!

    Anyway, to cut my story short, here is a summary of my solution:

    1. Navigate to your custom list.
    2. In the ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This displays newifs.aspx, which is the page shown when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts; in this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP will do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a rich text box. Tadah! The Spell Check tab is now enabled!

    Do the same steps for the (Item) Edit Form to enable spell checking when editing an item. This "no code" solution was discovered purely by accident!

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst?

    In the rush to cloud adoption, some companies have made the decision to redesign their core systems with a cloud-first approach. However, a cloud-first approach means that the system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What's the Problem?

    Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper" or "it allows us to scale" without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability?

    I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime?

    Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on the overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and gracefully retry in order to be successful in the cloud and provide service continuity to end users.

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • Browser not parsing PAC file properly?

    - by mfinni
    I have a long PAC file. The browsers (IE and Chrome) are configured to use it, and it generally does what it says on the tin. However, I have a domain that continues to go through the proxy although it should be going direct:

        // Match specific hosts and IPs entered as hosts
        if (buncha stuff ||
            shExpMatch(host, "(*.newmarketinc.com)") ||
            shExpMatch(host, "(newmarketinc.com)") ||
            buncha stuff)
            return "DIRECT";

    Pactester shows that anything in the domain should go direct:

        h:\pacparser\pactester.exe -p h:\pacfile -u http://daas.newmarketinc.com
        DIRECT

    But we continue to pass traffic to hosts in this domain via the proxy; Wireshark and Fiddler both show this. How do I figure out how my browser has gotten brain-damaged? Traffic to other sites in this stanza does properly go direct, as confirmed by Fiddler and Wireshark.
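
    One detail worth ruling out (an assumption; the full PAC file isn't shown): shExpMatch() treats the parentheses in "(*.newmarketinc.com)" as literal characters in some engines, so the pattern may behave differently in the browser's PAC evaluator than in pactester. A sketch of the same stanza without them, using dnsDomainIs() for the subdomain test:

        // Match the apex host and any subdomain of newmarketinc.com
        if (dnsDomainIs(host, ".newmarketinc.com") ||
            shExpMatch(host, "newmarketinc.com") ||
            shExpMatch(host, "*.newmarketinc.com"))
            return "DIRECT";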

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Second, is there some way to "copy" files from S3 to Glacier that I have missed? Third, can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
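
    On the bucket-to-bucket point: S3's COPY operation is server-side, so a sync between buckets does not have to route the data through the machine issuing the command. A sketch with the AWS CLI (bucket names are hypothetical; the CLI needs credentials that can read the source and write the destination):

        # Server-side copy: objects move from bucket to bucket inside S3,
        # not down to the client running the command.
        aws s3 sync s3://original-bucket s3://backup-bucket-2013-11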

    Read the article

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242 to 11.22.33.44, port 5353 on Linux 3.x (currently 3.2.52, but I can upgrade if needed). It looks like the simple DNAT rule setup doesn't work: telnet 127.0.0.1 4242 hangs for a minute in Trying 127.0.0.1..., and then it times out. Maybe it's because the kernel is discarding the returning packets (e.g. SYN+ACK), because it considers them Martian. I don't need an explanation of why the simple solution doesn't work; I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address, outside the 127.0.0.0/8 network, but in this case I need 127.0.0.1 as the destination address. I know that I could set up a user-level port-forwarding process, but here I need a solution which can be set up using iptables and doesn't need helper processes. I was googling for this for an hour; it has been asked multiple times, but I couldn't find any working solution. There are also many questions about DNAT to 127.0.0.1, but I don't need that, I need the opposite.
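
    A sketch of one approach that depends on kernel support (the route_localnet sysctl appeared in Linux 3.6, so it would require the upgrade mentioned above): allow 127.0.0.0/8 destinations to be routed instead of being treated as Martian, DNAT locally generated traffic in the OUTPUT chain, and masquerade so the replies come back through the same path.

        # Requires Linux >= 3.6 for the route_localnet knob.
        sysctl -w net.ipv4.conf.all.route_localnet=1

        # Rewrite 127.0.0.1:4242 to 11.22.33.44:5353 for locally generated packets.
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 4242 \
                 -j DNAT --to-destination 11.22.33.44:5353

        # Source-NAT the forwarded connection so replies are not sent from 127.0.0.1.
        iptables -t nat -A POSTROUTING -p tcp -d 11.22.33.44 --dport 5353 \
                 -j MASQUERADE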

    Read the article

  • Reserve one http slot for /server-status?

    - by Stefan Lasiewski
    I have an Apache server which is hanging for some reason. When I want to check the load on an Apache server, I normally use mod_status via the URL http://webserver1.example.org/server-status, or from the command line via service httpd fullstatus. Today, however, the server is refusing all new connections. Some mysterious problem is causing connections to stall, which means the number of connections fills all available slots (i.e., the number of connections exceeds the MaxClients setting), and therefore neither http://webserver1.example.org/server-status nor service httpd fullstatus can return anything. Is it possible to configure Apache to reserve one or two slots for the mod_status pages?

    Read the article

  • Directories Throwing 404 Errors - Virtual Host Configuration and mod_rewrite

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, is a better way to go if your server allows it. The virtual host is working, and the URL I use for my site is like this: devserver:9090

    Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver

            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            <IfModule rewrite_module>
                RewriteEngine ON

                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]

                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]

                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>

            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php redirects to devserver:9090/page/ but going to a directory (that has an index.php), such as devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
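
    One likely culprit (a sketch, untested against this exact setup): the rule RewriteRule (.*)/$ /$1.php [L] turns /dir/ into /dir.php even when dir is a real directory, because in a per-server (vhost) context mod_rewrite runs before the URL is mapped to the filesystem, so %{REQUEST_FILENAME} does not behave the way it does in .htaccess and the !-d condition never fires. Testing against %{DOCUMENT_ROOT} explicitly, and letting real directories fall through to their index.php, might look like this:

        # If the trailing-slash request names a real directory, serve its index.php.
        RewriteCond %{DOCUMENT_ROOT}/$1 -d
        RewriteRule ^/?(.+)/$ /$1/index.php [L]

        # Otherwise map the trailing-slash URL back onto a .php file.
        RewriteCond %{DOCUMENT_ROOT}/$1 !-d
        RewriteRule ^/?(.+)/$ /$1.php [L]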

    Read the article

  • Can't connect to FTP server from a specific location

    - by wv_pip
    Last week, while uploading website files to our server via FTP, the transfer failed. Ever since then, I haven't been able to connect to the server from work. I can connect just fine from home, or with an FTP app on my cell phone as long as I'm on the cell network, but I can't access the server from any machine on my work network. It's not a credential issue, either: the error message I always get says that a connection cannot be established, and I am never prompted for my credentials. I have changed absolutely nothing on our domain controller or our firewall/router. I've contacted our ISP (who hosts the website/FTP server) and they can't find anything wrong on their end; they insist it must be something here at the office blocking access. I've also tested access to other FTP servers (ea.com, nvidia.com, etc.), so I know port 21 is not being blocked. I'm totally stumped. Any help is much appreciated. EDIT: Wireshark info here: http://www.cloudshark.org/captures/85a118ae9296?filter=ip.dst%3D%3D66.118.64.208

    Read the article

  • Windows Task Scheduler fails on EventData instruction

    - by Pete
    The scheduled task fails on the EventData instruction in this XML:

        <ValueQueries>
          <Value name="eventChannel">Event/System/Channel</Value>
          <Value name="eventRecordID">Event/System/EventRecordID</Value>
          <Value name="eventData">Event/EventData/Data</Value>
        </ValueQueries>

    The other two fields can be passed as arguments, and the EventData syntax matches other websites, so I don't know why it's failing. This is the Event Viewer XML:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Aptify.ExceptionManagerPublishedException" />
            <EventID Qualifiers="0">0</EventID>
            <Level>2</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2013-11-07T19:39:14.000000000Z" />
            <EventRecordID>97555</EventRecordID>
            <Channel>Application</Channel>
            <Computer>[Computer Name]</Computer>
            <Security />
          </System>
          <EventData>
            <Data>General Information
        *********************************************
        Additional Info:
        ExceptionManager.MachineName: [Computer Name]
        ExceptionManager.TimeStamp: 11/7/2013 12:39:14 PM
        ExceptionManager.FullName: AptifyExceptionManagement, Version=4.0.0.0, Culture=neutral, PublicKeyToken=[key]
        ExceptionManager.AppDomainName: Aptify Shell.exe
        ExceptionManager.ThreadIdentity:
        ExceptionManager.WindowsIdentity: ACA_DOMAIN\pbassett

        1) Exception Information
        *********************************************
        Exception Type: Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntityValidationException
        Entity: Tasks
        ErrorString: Task Type "Make Contact" is not active.
        MachineName: [machine]
        CreatedDateTime: 11/7/2013 12:39:14 PM
        AppDomainName: Aptify Shell.exe
        ThreadIdentityName:
        WindowsIdentityName: [identity]
        Severity: 0
        ErrorNumber: 0
        Message: Task Type "Make Contact" is not active.
        Data: System.Collections.ListDictionaryInternal
        TargetSite: Boolean Save(Boolean, System.String ByRef, System.String)
        HelpLink: NULL
        Source: AptifyGenericEntity

        StackTrace Information
        *********************************************
        at Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntity.Save(Boolean AllowGUI, String& ErrorString, String TransactionID)</Data>
          </EventData>
        </Event>

    Read the article

  • Outlook Anywhere remote https connection issue

    - by holian
    We have SBS 2003, and we use DynDNS, forwarding port 443 on the DynDNS address to port 443 on the local server IP:

        mycompany.dyndns.org:443 -> server.mycompany.local:443

    On an Android phone I can check my mail with ActiveSync, and from a remote machine I can check my mail in OWA (https://mycompany.dyndns.org/exchange). But I can't set up Outlook 2013 to connect remotely. I installed the server.mycompany.local certificate into the remote machine's trusted certificate container, but I get the error message:

        "There is a problem with the proxy server's security certificate. The name on the security certificate is invalid or does not match the name of the target site. Outlook is unable to connect to the proxy server. (Error Code 10)"

    Is it possible to connect to Exchange via DynDNS? What's the problem? Thank you.

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor the resource consumption of those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments, and something like 2h, 3h, ... would be preferable. However, I cannot seem to change the time interval. (A screenshot of my Customize Performance Chart dialog was attached to the original post.) I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet, as these are long-running experiments)? Any help (or pointers to documentation which deals with the above; I've already looked but did not find much) would be greatly appreciated.

    Read the article

  • How to block own rpcap traffic where tshark is running?

    - by Pankaj Goyal
    Platform: Fedora 13, 32-bit machines.

        RemoteMachine$ ./rpcapd -n
        ClientMachine$ tshark -w "filename" -i "any interface name"

    As soon as the capture starts, without any capture filter, thousands of packets get captured. rpcapd binds to port 2002 by default, and while establishing the connection it sends a randomly chosen port number to the client for further communication. The client and server machines then exchange TCP packets through randomly chosen ports, so I cannot even specify a capture filter to block this rpcap-related TCP traffic. Wireshark and tshark for Windows have a "Do not capture own rpcap traffic" option under Remote Settings in the Edit Interface dialog box, but there is no such option in tshark for Linux. It would also help if anyone can tell me how Wireshark blocks its own rpcap traffic...

    Read the article

  • How to approach taking a very diverse hybrid network and making something lean and cohesive

    - by Gregg Leventhal
    I am going to have the opportunity (in the role of Linux sysadmin) to work on optimizing a corporate server network that has a lot of different application servers, from LAMP stacks to JBoss to IIS-based ASP/.NET systems of all sorts. I am interested to hear how you would approach evaluating and consolidating a network in a situation like this, where you are walking in cold. What are some of your go-to techniques?

    Read the article

  • How can I set audit controls on files owned by TrustedInstaller using Powershell?

    - by Drise
    I am trying to set audit controls on a number of files (listed in ACLsWin.txt) located in %Windows%\System32 (for example, aaclient.dll) using the following PowerShell script:

        $FileList = Get-Content ".\ACLsWin.txt"
        $ACL = New-Object System.Security.AccessControl.FileSecurity
        $AccessRule = New-Object System.Security.AccessControl.FileSystemAuditRule("Everyone", "Delete", "Failure")
        $ACL.AddAuditRule($AccessRule)

        foreach ($File in $FileList)
        {
            Write-Host "Changing audit on $File"
            $ACL | Set-Acl $File
        }

    Whenever I run the script, I get the error PermissionDenied [Set-Acl] UnauthorizedAccessException. This seems to come from the fact that the owner of these files is TrustedInstaller. I am running the script as Administrator (even on the built-in Administrator account) and it still fails. I can set these audit controls by hand using the Security tab, but there are at least 200 files, and doing it by hand may lead to human error. How can I get around TrustedInstaller and set these audit controls using PowerShell?
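
    One pattern worth trying (a sketch; I can't verify it against TrustedInstaller-owned files specifically): Set-Acl with a freshly constructed FileSecurity object attempts to write every section of the security descriptor, including the owner, which can fail on files you don't own. Reading only the Audit section of each file's existing descriptor and writing only that section back avoids touching the owner at all:

        $AccessRule = New-Object System.Security.AccessControl.FileSystemAuditRule("Everyone", "Delete", "Failure")

        foreach ($File in Get-Content ".\ACLsWin.txt")
        {
            Write-Host "Changing audit on $File"
            $Item = Get-Item $File
            # Fetch only the SACL (audit) section, leaving owner and DACL alone.
            $Sacl = $Item.GetAccessControl([System.Security.AccessControl.AccessControlSections]::Audit)
            $Sacl.AddAuditRule($AccessRule)
            $Item.SetAccessControl($Sacl)
        }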

    Read the article
