Search Results

Search found 9211 results on 369 pages for 'radix sort'.


  • How do I find and kill a php loop (process)?

    - by Hoytman
    I have a PHP script I have been developing which calls an external API within a loop. It is being tested on a VPS running LAMP on Debian. I noticed this morning that the API was not responding to my script. When I called the provider, they told me that my server had been calling the API thousands of times per hour for the past 10 hours. I am assuming that a PHP script (which I had been working on the day before and testing on my VPS) entered an infinite loop during one of the executions and never came out (I have been testing it from the command prompt, not over the web). I have attempted to stop and start Apache, but the API support staff say that the calls are still coming in from my server address. How can I find and stop the process? Also, is it possible that the Apache stop/start solved the problem, but the API is still working through past calls? Please forgive me for not using my local test environment correctly. Edit: I do not know the process name; I need to discover the name (or PID) based on its behavior.
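
    No answer is attached to this result, but as a minimal sketch of the usual approach on a Debian box: list processes whose command line mentions php, then kill the offender by PID. The PID below is a hypothetical placeholder taken from the listing step.

      # List candidate processes; pgrep -af matches against the full command
      # line, so CLI invocations like "php myscript.php" show up with PIDs.
      import os
      import signal
      import subprocess

      listing = subprocess.run(["pgrep", "-af", "php"],
                               capture_output=True, text=True)
      print(listing.stdout)

      # After picking the offending PID from the listing above, stop it.
      pid = 12345                    # hypothetical PID
      os.kill(pid, signal.SIGTERM)   # graceful stop first
      # os.kill(pid, signal.SIGKILL) # last resort if SIGTERM is ignored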

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids: { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }. Updates are then issued in the format: UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6); UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10); I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. The weird issue I'm hitting: when I tested with updating ~13 million records, it only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory and CPU, and the whole DB should fit into memory with plenty of room to grow.
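
    As a sketch of the batching strategy described above, in Python for illustration (the question uses Ruby): group the pre-calculated updates by update_value and emit one UPDATE per chunk of sorted ids. The chunk size is an assumption, not something from the question; table and column names follow the question.

      from collections import defaultdict

      CHUNK = 1000  # ids per IN list; a tunable assumption

      def build_statements(rows):
          """rows: iterable of (product_id, update_value) pairs."""
          groups = defaultdict(list)
          for product_id, update_value in rows:
              groups[update_value].append(product_id)
          for value, ids in groups.items():
              ids.sort()  # sorted ids keep InnoDB page access roughly sequential
              for i in range(0, len(ids), CHUNK):
                  id_list = ",".join(map(str, ids[i:i + CHUNK]))
                  yield (f"UPDATE product SET update_value = {value} "
                         f"WHERE product_id IN ({id_list});")

      for stmt in build_statements([(1, 150), (2, 150), (7, 160)]):
          print(stmt)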

    Read the article

  • What should I use to ping multiple IPs and get notified of time outs?

    - by HumanVirus
    I've been using MultiPing to ping hundreds of IPs (from access points and such) and check their performance (packet loss, latency) and uptime. The program is very easy to use, but I was wondering if someone could recommend something that would work better and that would also work on Linux. The features I'm looking for are:
    - Notification types: at least desktop notifications and SMS, but it would be great if it also had e-mail, IM, or other types of notifications. (MultiPing has some of these, but they don't work too well.)
    - Being notified about the root problem only: since some devices are dependent on others, I'd like to be notified only about the root problem. E.g. let's say I have A[x.x.x.222] -> B[x.x.x.33] -> C[x.x.x.44] -> D[x.x.x.55], and B goes down; C and D will therefore also be down. Is it possible to get a notification only about B being down?
    - Light on resources.
    - Ideally multiplatform, or at least available for both Linux and Windows.
    I've heard about Nagios and Shinken being used for monitoring. Would you recommend that I use something of the sort, or would that be too much for my needs? If using Nagios, Shinken, or similar software is recommended, can anyone tell me what sites I should go to or what books I should get that would suit someone who is totally new at this? I'd appreciate any suggestions.
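
    For the "root problem only" requirement, a minimal sketch of the suppression logic: walk each dependency chain in order and report only the first unreachable hop, since everything behind it is implied. The addresses are the placeholders from the question; tools like Nagios model the same idea with parent/child host relationships (DOWN vs. UNREACHABLE states).

      import subprocess

      CHAIN = ["x.x.x.222", "x.x.x.33", "x.x.x.44", "x.x.x.55"]  # A -> B -> C -> D

      def is_up(host, timeout_s=1):
          # Linux iputils flags: -c 1 = one probe, -W = reply timeout in seconds
          r = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
          return r.returncode == 0

      for host in CHAIN:
          if not is_up(host):
              print(f"root problem: {host} is down; hosts behind it suppressed")
              break
      else:
          print("all hosts reachable")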

    Read the article

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using phpmailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, pwd, ports etc and gmail acct set up for SSL connections on 465), but it's failing on my new server. FIREWALL: I allow related/established, port 80 and a port for SSH on INPUT, then this on OUTPUT:
      7906 474K DROP   tcp -- any any anywhere              anywhere             tcp dpt:smtp
         0    0 ACCEPT tcp -- any any localhost.localdomain yw-in-f109.1e100.net tcp dpt:submission
         0    0 ACCEPT tcp -- any any localhost.localdomain gx-in-f109.1e100.net tcp dpt:ssmtp
         0    0 DROP   tcp -- any any anywhere              anywhere             tcp dpt:submission
         9  540 DROP   tcp -- any any anywhere              anywhere             tcp dpt:ssmtp
    This output chain works on my other server, and disabling it doesn't get mail delivered either. WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email. What else could it be? I understand phpmailer does NOT require a local MTA or procmail (this is sort of the point - I don't want the security or admin overhead of a full blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using phpmailer? Any ideas at all greatly appreciated - wasted a day on this nonsense already!
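
    One way to separate the firewall question from the phpmailer question: try the same SSL connection from the server with a few lines of Python's standard smtplib. If this also times out, the block is at the network/iptables level rather than in PHP. The credentials are placeholders.

      import smtplib

      try:
          with smtplib.SMTP_SSL("smtp.gmail.com", 465, timeout=10) as smtp:
              smtp.login("user@gmail.com", "password")  # placeholder credentials
              print("connected and authenticated on port 465")
      except Exception as exc:
          print(f"failed: {exc}")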

    Read the article

  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PF) have to be phased out in favour of shared mailboxes. Unfortunately, they don't seem to have any procedures or guidelines for this migration and I can't find much online either. I've already migrated one of our public folders so far as a sort of test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (With New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account on the AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state and copied the entire folder to a local PST in Outlook 2010, from which the folder was in turn copied to the new mailbox. Permissions and Folder Visible were set for all users and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient. I'd welcome some input on automating or at least streamlining the process. Additionally, we are unsure of what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving folder structure is a key requirement and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the Calendars and Contact lists in our public folders. I suppose I'll be forced to create new mailboxes for every one of these we have as well, then set up share permissions for their Calendar and Contact items, but would be happy to be proven wrong.

    Read the article

  • What could cause random files being uploaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone is placing files into it, or something of that sort, and any attempt to delete them is successful; HOWEVER, they reappear over time (maybe not the exact same ones, but random files). I will provide what information I can and several pictures of my problem:
    sandbox.mys4l.com/visual/files/b1.jpg - Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.
    sandbox.mys4l.com/visual/files/b2.jpg - This is what is inside one of those weird files; it appears to be nothing problematic.
    sandbox.mys4l.com/visual/files/b4.jpg - As you can see, in the time it took me to take the first picture, more odd files showed up. These log files are also being uploaded to this directory, and I know I didn't put them there.
    sandbox.mys4l.com/visual/files/b7.jpg - This is inside one of these mysterious .log files; I'm not sure what it's all about.
    These files only appear to be going into this specific area, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner, and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys, as I hear this is the best place to find answers. Hope this problem has a solution!
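
    No origin was identified in this question, but a sketch of one way to narrow it down: snapshot the directory periodically and log anything new with its timestamp, then compare those times against the web server and cron logs. The path is a placeholder for the affected directory.

      import os
      import time

      WATCH = "/var/www/visual/files"  # placeholder path
      seen = set(os.listdir(WATCH))

      while True:
          time.sleep(5)
          current = set(os.listdir(WATCH))
          for name in sorted(current - seen):
              st = os.stat(os.path.join(WATCH, name))
              print(f"{time.ctime(st.st_mtime)}  new file: {name}")
          seen = current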

    Read the article

  • Enterprise-level ticketing and inventory system recommendations [closed]

    - by TrackingSystem
    My company is sort of at a standstill when it comes to our technician ticketing and inventory system. We currently use Numara TrackIt! - which isn't cutting it, to say the least. Dell recommended KACE, but it's web based, which is what we would like to avoid. We need a good ticketing and inventory system with the following:
    - Server/client setup
    - Client supports XP/Windows 7 Ent.
    - Web based as well as client is a plus
    - Technician ticketing
    - Active Directory integration
    - Inventory system (asset tag tracking, PO tracking, etc.)
    - Exchange integration - when tickets are made you have an option to send to the requester
    - Something that will scale well
    Please, if anyone is a systems admin or has knowledge regarding use of a great ticketing system, please let me know. We are a large international corporation - price honestly isn't an issue. Keep in mind this will be mainly used for technicians to create tickets, enter inventory (track PCs), and possibly even track purchase orders. We want an enterprise-level ticketing system with these capabilities - please help! Thank you.

    Read the article

  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, the root cause of my problem is not known to me; whatever it is, I experience frequent DNS failures. When it happens I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When this happens I switch from the active setting to the other one and the problem goes away. But there is a side effect to this: when browsing to Gmail fails, after switching the DNS I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on:
    1. OpenDNS fails to resolve mail.google.com to its IP.
    2. My ISP sends me a page showing search results for 'mail.google.com'.
    3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served over https, so no exception is thrown by the wrong binding.
    4. After switching the DNS, the domain is correctly resolved to the Gmail server's IP, and since this is over https the handshake is triggered.
    5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS.
    I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening. P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.
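
    To test the DNS half of the theory, the two resolvers can be queried directly and their answers compared; if one returns the ISP search page's address instead of a Google IP, that supports the redirect explanation. A sketch using nslookup against OpenDNS and Google's public resolvers:

      import subprocess

      for resolver in ("208.67.222.222", "8.8.8.8"):  # OpenDNS, Google DNS
          out = subprocess.run(["nslookup", "mail.google.com", resolver],
                               capture_output=True, text=True, timeout=10)
          print(f"--- {resolver} ---")
          print(out.stdout)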

    Read the article

  • Excel 2013: VLookup for cells that share common characters within cell but are both surrounded by other non-matching text

    - by Kylie Z
    I am pulling information from 2 different databases. The databases use different naming protocols for the exact same item/specified placement; however, they always have certain components of the name in common. The length of these names can vary throughout each of the databases (see the pic below), so I don't think counting characters would help. I need a formula (probably a vlookup/match/index of some sort) to pair up the names from the 2nd database with the names from the 1st database and then place them in the adjacent column (B2) on Sheet1. Up to this point I've had to match, copy, and paste the pairs manually from one sheet to the other, and it takes FOREVER. Any help would be much appreciated!!! For example:
    Database1 name in Sheet1, A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
    Database2 name in Sheet2, A13: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS728X90_728X90_DFA
    Common factors: "ROSMSNAUTOSMASSACHUSETTS" & "728X90". Therefore A2 and A13 need to pair up.
    In some cases, Database 1 and 2 will have a common name aspect but the sizing will be different. They need to have BOTH aspects in common in order to be paired, so I would NOT want the below example to pair up.
    Database1 name in Sheet1, A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
    Database2 name in Sheet2, A12: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS300X250_300X250_DFA
    Common factor: only "ROSMSNAUTOSMASSACHUSETTS" matches. "728x90" is not equal to "300X250" - sizing is different, so they should not be paired.
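
    A single worksheet formula for this is awkward; as a sketch of the pairing rule in Python instead: two names pair only when their size tokens (e.g. 728X90) intersect and they share a sufficiently long common substring (the placement name). The minimum-length threshold is an assumption, not from the question.

      import re
      from difflib import SequenceMatcher

      SIZE = re.compile(r"\d+X\d+")

      def pairs(db1_names, db2_names, min_common=15):
          for a in db1_names:
              au = a.upper()
              for b in db2_names:
                  bu = b.upper()
                  # both names must carry the same size token, e.g. 728X90
                  if not set(SIZE.findall(au)) & set(SIZE.findall(bu)):
                      continue
                  m = SequenceMatcher(None, au, bu).find_longest_match(
                      0, len(au), 0, len(bu))
                  if m.size >= min_common:  # shared placement string
                      yield a, b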

    Read the article

  • Fixing bent pins on a CPU

    - by Pekka
    While replacing a mainboard in a desktop machine (see related question), I did something stupid. I inserted the CPU into the new mainboard, but didn't check for the right position. When it didn't immediately lock in, I pressed slightly before realizing what was wrong. The result was a number of bent pins. I tried every tutorial that popped up when Googling "CPU bent pins" - using credit cards, sewing needles, and a hunting knife to get the pins back into position - but to no avail: for every pin I get straightened out, two others are bent. I have no problem getting individual pins straightened out, but my many attempts have led to many pins being slightly askew - enough for the CPU not to fit into the socket (an AMD X3 one). Maybe I just lack the fine motor skills. What I would need is some sort of a grid to fix all pins at once. It's a €50 processor, so the loss is not catastrophic, but before I go buy a new one I thought I'd check here whether anybody knows some magic trick, or a cheap, generally available tool to fix this.

    Read the article

  • 3 Server, is this a cluster scenario?

    - by HornedBeast
    Hello, at the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and mail. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster and load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Why does my DL380 G3 crank the fan up so high (and how do I stop it)?

    - by smoofra
    I've got an HP DL380, and after I leave it on for a while it decides it needs to run the fans at a much higher speed than it did before. It's really loud and annoying. Is there any way to manually control the fan speed, or otherwise get it to stop doing that? Thanks. Edit: I guess I should have made it clear that this is a bit of a jury-rigged situation. I know the ideal solution is to put the thing in a server room with nice cool air. Unfortunately, that isn't happening. This is the sort of problem that calls for a jury-rigged solution, like:
    - manually setting the fan speed and accepting the fact that it's going to run hot
    - shutting down a CPU (can I do that?)
    - spinning down the disks when they're not in use (is that possible?)
    Solution: install HP's system health monitoring daemon, hpasmd. I installed it to try to figure out what was going on, and just running it fixed the problem.

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that provides users in the Backup Operators group with start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, give the user start/stop permissions for the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, but I have to be doing something wrong. My install is pretty much the default. The domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user the permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.

    Read the article

  • Would a USB hub work in reverse?

    - by Tim
    Imagine for a moment a 4-port USB hub. Normally how this works is that the hub has one plug that goes to the computer, then 4 ports that you can plug other things into (thumb drive, keyboard, mouse, etc.). I am wondering if I can use it in reverse: I would have 1 keyboard going into the hub, and then plug male-to-male USB cables from the 4 ports into 4 different PCs. My aim is that when a key is pressed on the keyboard, all 4 PCs will receive it as if the keyboard were plugged into them. Does anyone know if this would work? And if not, does anyone have any ideas how I could get the same effect? EDIT: So I am looking for more of a KVM-switch type device rather than a USB hub. However, all of the KVM switches I've found use some sort of mechanism to select which computer you'll be using (some are physical switches/buttons, others do it via software "automatically" somehow). However, I need to have 1 keyboard hooked up to 2 computers, and when I press a key on the keyboard I want the keypress to be sent to both computers simultaneously, not to one or the other. Does anyone know if KVMs with this feature exist?

    Read the article

  • Custom filtering in Android using ArrayAdapter

    - by Alxandr
    I'm trying to filter my ListView which is populated with this ArrayAdapter: package me.alxandr.android.mymir.adapters; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Set; import me.alxandr.android.mymir.R; import me.alxandr.android.mymir.model.Manga; import android.content.Context; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Filter; import android.widget.SectionIndexer; import android.widget.TextView; public class MangaListAdapter extends ArrayAdapter<Manga> implements SectionIndexer { public ArrayList<Manga> items; public ArrayList<Manga> filtered; private Context context; private HashMap<String, Integer> alphaIndexer; private String[] sections = new String[0]; private Filter filter; private boolean enableSections; public MangaListAdapter(Context context, int textViewResourceId, ArrayList<Manga> items, boolean enableSections) { super(context, textViewResourceId, items); this.filtered = items; this.items = filtered; this.context = context; this.filter = new MangaNameFilter(); this.enableSections = enableSections; if(enableSections) { alphaIndexer = new HashMap<String, Integer>(); for(int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while(it.hasNext()) keyList.add(it.next()); Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); } } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if(v == null) { LayoutInflater vi = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.mangarow, null); } Manga o = items.get(position); if(o != null) { TextView tt = (TextView) v.findViewById(R.id.MangaRow_MangaName); TextView bt = (TextView) v.findViewById(R.id.MangaRow_MangaExtra); if(tt != null) tt.setText(o.getName()); if(bt != null) bt.setText(o.getLastUpdated() + " - " + o.getLatestChapter()); if(enableSections && getSectionForPosition(position) != getSectionForPosition(position + 1)) { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setText(sections[getSectionForPosition(position)]); h.setVisibility(View.VISIBLE); } else { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setVisibility(View.GONE); } } return v; } @Override public void notifyDataSetInvalidated() { if(enableSections) { for (int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while (it.hasNext()) { keyList.add(it.next()); } Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); super.notifyDataSetInvalidated(); } } public int getPositionForSection(int section) { if(!enableSections) return 0; String letter = sections[section]; return 
alphaIndexer.get(letter); } public int getSectionForPosition(int position) { if(!enableSections) return 0; int prevIndex = 0; for(int i = 0; i < sections.length; i++) { if(getPositionForSection(i) > position && prevIndex <= position) { prevIndex = i; break; } prevIndex = i; } return prevIndex; } public Object[] getSections() { return sections; } @Override public Filter getFilter() { if(filter == null) filter = new MangaNameFilter(); return filter; } private class MangaNameFilter extends Filter { @Override protected FilterResults performFiltering(CharSequence constraint) { // NOTE: this function is *always* called from a background thread, and // not the UI thread. constraint = constraint.toString().toLowerCase(); FilterResults result = new FilterResults(); if(constraint != null && constraint.toString().length() > 0) { ArrayList<Manga> filt = new ArrayList<Manga>(); ArrayList<Manga> lItems = new ArrayList<Manga>(); synchronized (items) { Collections.copy(lItems, items); } for(int i = 0, l = lItems.size(); i < l; i++) { Manga m = lItems.get(i); if(m.getName().toLowerCase().contains(constraint)) filt.add(m); } result.count = filt.size(); result.values = filt; } else { synchronized(items) { result.values = items; result.count = items.size(); } } return result; } @SuppressWarnings("unchecked") @Override protected void publishResults(CharSequence constraint, FilterResults results) { // NOTE: this function is *always* called from the UI thread. filtered = (ArrayList<Manga>)results.values; notifyDataSetChanged(); } } } However, when I call filter('test') on the filter nothing happens at all (or the background-thread is run, but the list isn't filtered as far as the user conserns). How can I fix this?

    Read the article

  • Some problems with GridView in webpart with multiple filters.

    - by NF_81
    Hello, I'm currently working on a highly configurable Database Viewer webpart for WSS 3.0 which we are going to need for several customized sharepoint sites. Sorry in advance for the large wall of text, but i fear it's necessary to recap the whole issue. As background information and to describe my problem as good as possible, I'll start by telling you what the webpart shall do: Basically the webpart contains an UpdatePanel, which contains a GridView and an SqlDataSource. The select-query the Datasource uses can be set via webbrowseable properties or received from a consumer method from another webpart. Now i wanted to add a filtering feature to the webpart, so i want a dropdownlist in the headerrow for each column that should be filterable. As the select-query is completely dynamic and i don't know at design time which columns shall be filterable, i decided to add a webbrowseable property to contain an xml-formed string with filter information. So i added the following into OnRowCreated of the gridview: void gridView_RowCreated(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.Header) { for (int i = 0; i < e.Row.Cells.Count; i++) { if (e.Row.Cells[i].GetType() == typeof(DataControlFieldHeaderCell)) { string headerText = ((DataControlFieldHeaderCell)e.Row.Cells[i]).ContainingField.HeaderText; // add sorting functionality if (_allowSorting && !String.IsNullOrEmpty(headerText)) { Label l = new Label(); l.Text = headerText; l.ForeColor = Color.Blue; l.Font.Bold = true; l.ID = "Header" + i; l.Attributes["title"] = "Sort by " + headerText; l.Attributes["onmouseover"] = "this.style.cursor = 'pointer'; this.style.color = 'red'"; l.Attributes["onmouseout"] = "this.style.color = 'blue'"; l.Attributes["onclick"] = "__doPostBack('" + panel.UniqueID + "','SortBy$" + headerText + "');"; e.Row.Cells[i].Controls.Add(l); } // check if this column shall be filterable if (!String.IsNullOrEmpty(filterXmlData)) { XmlNode columnNode = GetColumnNode(headerText); if (columnNode != null) { string dataValueField = columnNode.Attributes["DataValueField"] == null ? "" : columnNode.Attributes["DataValueField"].Value; string filterQuery = columnNode.Attributes["FilterQuery"] == null ? "" : columnNode.Attributes["FilterQuery"].Value; if (!String.IsNullOrEmpty(dataValueField) && !String.IsNullOrEmpty(filterQuery)) { SqlDataSource ds = new SqlDataSource(_conStr, filterQuery); DropDownList cbx = new DropDownList(); cbx.ID = "FilterCbx" + i; cbx.Attributes["onchange"] = "__doPostBack('" + panel.UniqueID + "','SelectionChange$" + headerText + "$' + this.options[this.selectedIndex].value);"; cbx.Width = 150; cbx.DataValueField = dataValueField; cbx.DataSource = ds; cbx.DataBound += new EventHandler(cbx_DataBound); cbx.PreRender += new EventHandler(cbx_PreRender); cbx.DataBind(); e.Row.Cells[i].Controls.Add(cbx); } } } } } } } GetColumnNode() checks in the filter property, if there is a node for the current column, which contains information about the Field the DropDownList should bind to, and the query for filling in the items. In cbx_PreRender() i check ViewState and select an item in case of a postback. In cbx_DataBound() i just add tooltips to the list items as the dropdownlist has a fixed width. Previously, I used AutoPostback and SelectedIndexChanged of the DDL to filter the grid, but to my disappointment it was not always fired. 
Now i check __EVENTTARGET and __EVENTARGUMENT in OnLoad and call a function when the postback event was due to a selection change in a DDL: private void FilterSelectionChanged(string columnName, string selectedValue) { columnName = "[" + columnName + "]"; if (selectedValue.IndexOf("--") < 0 ) // "-- All --" selected { if (filter.ContainsKey(columnName)) filter[columnName] = "='" + selectedValue + "'"; else filter.Add(columnName, "='" + selectedValue + "'"); } else { filter.Remove(columnName); } gridView.PageIndex = 0; } "filter" is a HashTable which is stored in ViewState for persisting the filters (got this sample somewhere on the web, don't remember where). In OnPreRender of the webpart, i call a function which reads the ViewState and apply the filterExpression to the datasource if there is one. I assume i had to place it here, because if there is another postback (e.g. for sorting) the filters are not applied any more. private void ApplyGridFilter() { string args = " "; int i = 0; foreach (object key in filter.Keys) { if (i == 0) args = key.ToString() + filter[key].ToString(); else args += " AND " + key.ToString() + filter[key].ToString(); i++; } dataSource.FilterExpression = args; ViewState.Add("FilterArgs", filter); } protected override void OnPreRender(EventArgs e) { EnsureChildControls(); if (WebPartManager.DisplayMode.Name == "Edit") { errMsg = "Webpart in Edit mode..."; return; } if (useWebPartConnection == true) // get select-query from consumer webpart { if (provider != null) { dataSource.SelectCommand = provider.strQuery; } } try { int currentPageIndex = gridView.PageIndex; if (!String.IsNullOrEmpty(m_SortExpression)) { gridView.Sort("[" + m_SortExpression + "]", m_SortDirection); } gridView.PageIndex = currentPageIndex; // for some reason, the current pageindex resets after sorting ApplyGridFilter(); gridView.DataBind(); } catch (Exception ex) { Functions.ShowJavaScriptAlert(Page, ex.Message); } base.OnPreRender(e); } So i set the filterExpression and the call DataBind(). I don't know if this is ok on this late stage.. don't have a lot of asp.net experience after all. If anyone can suggest a better solution, please give me a hint. This all works great so far, except when i have two or more filters and set them to a combination that returns zero records. Bam ... gridview gone, completely - without a possiblity of changing the filters back. So i googled and found out that i have to subclass gridview in order to always show the headerrow. I found this solution and implemented it with some modifications. The headerrow get's displayed and i can change the filters even if the returned result contains no rows. But finally to my current problem: When i have two or more filters set which return zero rows, and i change back one filter to something that should return rows, the gridview remains empty (although the pager is rendered). I have to completly refresh the page to reset the filters. When debugging, i can see in the overridden CreateChildControls of the grid, that the base method indeed returns 0, but anyway... the gridView.RowCount remains 0 after databinding. Anyone have an idea what's going wrong here?

    Read the article

  • Moving items from one tableView to another tableView with extras

    - by Totumus Maximus
    Let's say I have 2 UITableViews next to eachother on an ipad in landscape-mode. Now I want to move multiple items from one tableView to the other. They are allowed to be inserted on the bottom of the other tableView. Both have multiSelection activated. Now the movement itself is no problem with normal cells. But in my program each cell has an object which contains the consolidationState of the cell. There are 4 states a cell can have: Basic, Holding, Parent, Child. Basic = an ordinary cell. Holding = a cell which contains multiple childs but which wont be shown in this state. Parent = a cell which contains multiple childs and are shown directly below this cell. Child = a cell created by the Parent cell. The object in each cell also has some array which contains its children. The object also holds a quantityValue, which is displayed on the cell itself. Now the movement gets tricky. Holding and Parent cells can't move at all. Basic cells can move freely. Child cells can move freely but based on how many Child cells are left in the Parent. The parent will change or be deleted all together. If a Parent cell has more then 1 Child cell left it will stay a Parent cell. Else the Parent has no or 1 Child cell left and is useless. It will then be deleted. The items that are moved will always be of the same state. They will all be Basic cells. This is how I programmed the movement: *First I determine which of the tableViews is the sender and which is the receiver. *Second I ask all indexPathsForSelectedRows and sort them from highest row to lowest. *Then I build the data to be transferred. This I do by looping through the selectedRows and ask their object from the sender's listOfItems. *When I saved all the data I need I delete all the items from the sender TableView. This is why I sorted the selectedRows so I can start at the highest indexPath.row and delete without screwing up the other indexPaths. *When I loop through the selectedRows I check whether I found a cell with state Basic or Child. *If its a Basic cell I do nothing and just delete the cell. (this works fine with all Basic Cells) *If its a Child cell I go and check it's Parent cell immidiately. Since all Child cells are directly below the Parent cell and no other the the Parent's Childs are below that Parent I can safely get the path of the selected Childcell and move upwards and find it's Parent cell. When this Parent cell is found (this will always happen, no exceptions) it has to change accordingly. *The Parent cell will either be deleted or the object inside will have its quantity and children reduced. *After the Parent cell has changed accordingly the Child cell is deleted similarly like the Basic cells *After the deletion of the cells the receiver tableView will build new indexPaths so the movedObjects will have a place to go. *I then insert the objects into the listOfItems of the receiver TableView. The code works in the following ways: Only Basic cells are moved. Basic cells and just 1 child for each parent is moved. A single Basic/Child cell is moved. The code doesn't work when: I select more then 1 or all childs of some parent cell. The problem happens somewhere into updating the parent cells. I'm staring blindly at the code now so maybe a fresh look will help fix things. Any help will be appreciated. Here is the method that should do the movement: -(void)moveSelectedItems { UITableView *senderTableView = //retrieves the table with the data here. 
UITableView *receiverTableView = //retrieves the table which gets the data here. NSArray *selectedRows = senderTableView.indexPathsForSelectedRows; //sort selected rows from lowest indexPath.row to highest selectedRows = [selectedRows sortedArrayUsingSelector:@selector(compare:)]; //build up target rows (all objects to be moved) NSMutableArray *targetRows = [[NSMutableArray alloc] init]; for (int i = 0; i<selectedRows.count; i++) { NSIndexPath *path = [selectedRows objectAtIndex:i]; [targetRows addObject:[senderTableView.listOfItems objectAtIndex:path.row]]; } //delete rows at active for (int i = selectedRows.count-1; i >= 0; i--) { NSIndexPath *path = [selectedRows objectAtIndex:i]; //check what item you are deleting. act upon the status. Parent- and HoldingCells cant be selected so only check for basic and childs MyCellObject *item = [senderTableView.listOfItems objectAtIndex:path.row]; if (item.consolidatedState == ConsolidationTypeChild) { for (int j = path.row; j >= 0; j--) { MyCellObject *consolidatedItem = [senderTableView.listOfItems objectAtIndex:j]; if (consolidatedItem.consolidatedState == ConsolidationTypeParent) { //copy the consolidated item but with 1 less quantity MyCellObject *newItem = [consolidatedItem copyWithOneLessQuantity]; //creates a copy of the object with 1 less quantity. if (newItem.quantity > 1) { newItem.consolidatedState = ConsolidationTypeParent; [senderTableView.listOfItems replaceObjectAtIndex:j withObject:newItem]; } else if (newItem.quantity == 1) { newItem.consolidatedState = ConsolidationTypeBasic; [senderTableView.listOfItems removeObjectAtIndex:j]; MyCellObject *child = [senderTableView.listOfItems objectAtIndex:j+1]; child.consolidatedState = ConsolidationTypeBasic; [senderTableView.listOfItems replaceObjectAtIndex:j+1 withObject:child]; } else { [senderTableView.listOfItems removeObject:consolidatedItem]; } [senderTableView reloadData]; } } } [senderTableView.listOfItems removeObjectAtIndex:path.row]; } [senderTableView deleteRowsAtIndexPaths:selectedRows withRowAnimation:UITableViewRowAnimationTop]; //make new indexpaths for row animation NSMutableArray *newRows = [[NSMutableArray alloc] init]; for (int i = 0; i < targetRows.count; i++) { NSIndexPath *newPath = [NSIndexPath indexPathForRow:i+receiverTableView.listOfItems.count inSection:0]; [newRows addObject:newPath]; DLog(@"%i", i); //scroll to newest items [receiverTableView setContentOffset:CGPointMake(0, fmaxf(receiverTableView.contentSize.height - recieverTableView.frame.size.height, 0.0)) animated:YES]; } //add rows at target for (int i = 0; i < targetRows.count; i++) { MyCellObject *insertedItem = [targetRows objectAtIndex:i]; //all moved items will be brought into the standard (basic) consolidationType insertedItem.consolidatedState = ConsolidationTypeBasic; [receiverTableView.ListOfItems insertObject:insertedItem atIndex:receiverTableView.ListOfItems.count]; } [receiverTableView insertRowsAtIndexPaths:newRows withRowAnimation:UITableViewRowAnimationNone]; } If anyone has some fresh ideas of why the movement is bugging out let me know. If you feel like you need some extra information I'll be happy to add it. Again the problem is in the movement of ChildCells and updating the ParentCells properly. I could use some fresh looks and outsider ideas on this. Thanks in advance. *updated based on comments

    Read the article

  • XSL using apply templates and match instead of call template

    - by AdRock
    I am trying to make the transition from using call-template to using applay templates and match but i'm not getting any data displayed only what is between the volunteer tags. When i use call template it works fine but it was suggested that i use applay-templates and match and not it doesn't work Any ideas how to make this work? I can then applay it to all my stylesheets. <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:key name="volunteers-by-region" match="volunteer" use="region" /> <xsl:template name="hoo" match="/"> <html> <head> <title>Registered Volunteers</title> <link rel="stylesheet" type="text/css" href="volunteer.css" /> </head> <body> <h1>Registered Volunteers</h1> <h3>Ordered by the username ascending</h3> <h3>Grouped by the region</h3> <xsl:for-each select="folktask/member[user/account/userlevel='2']"> <xsl:for-each select="volunteer[count(. | key('volunteers-by-region', region)[1]) = 1]"> <xsl:sort select="region" /> <xsl:for-each select="key('volunteers-by-region', region)"> <xsl:sort select="folktask/member/user/personal/name" /> <div class="userdiv"> <xsl:apply-templates/> <!--<xsl:call-template name="member_userid"> <xsl:with-param name="myid" select="../user/@id" /> </xsl:call-template> <xsl:call-template name="member_name"> <xsl:with-param name="myname" select="../user/personal/name" /> </xsl:call-template>--> </div> </xsl:for-each> </xsl:for-each> </xsl:for-each> <xsl:if test="position()=last()"> <div class="count"><h2>Total number of volunteers: <xsl:value-of select="count(/folktask/member/user/account/userlevel[text()=2])"/></h2></div> </xsl:if> </body> </html> </xsl:template> <xsl:template match="folktask/member"> <xsl:apply-templates select="user/@id"/> <xsl:apply-templates select="user/personal/name"/> </xsl:template> <xsl:template match="user/@id"> <div class="heading bold"><h2>USER ID: <xsl:value-of select="." /></h2></div> </xsl:template> <xsl:template match="user/personal/name"> <div class="small bold">NAME:</div> <div class="large"><xsl:value-of select="." 
/></div> </xsl:template> </xsl:stylesheet> and my xml file <folktask xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="folktask.xsd"> <member> <user id="1"> <personal> <name>Abbie Hunt</name> <sex>Female</sex> <address1>108 Access Road</address1> <address2></address2> <city>Wells</city> <county>Somerset</county> <postcode>BA5 8GH</postcode> <telephone>01528927616</telephone> <mobile>07085252492</mobile> <email>[email protected]</email> </personal> <account> <username>AdRock</username> <password>269eb625e2f0cf6fae9a29434c12a89f</password> <userlevel>4</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="1"> <roles></roles> <region>South West</region> </volunteer> </member> <member> <user id="2"> <personal> <name>Aidan Harris</name> <sex>Male</sex> <address1>103 Aiken Street</address1> <address2></address2> <city>Chichester</city> <county>Sussex</county> <postcode>PO19 4DS</postcode> <telephone>01905149894</telephone> <mobile>07784467941</mobile> <email>[email protected]</email> </personal> <account> <username>AmbientExpert</username> <password>8e64214160e9dd14ae2a6d9f700004a6</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="2"> <roles>Van Driver</roles> <region>South Central</region> </volunteer> </member> <member> <user id="3"> <personal> <name>Skye Saunders</name> <sex>Female</sex> <address1>31 Anns Court</address1> <address2></address2> <city>Cirencester</city> <county>Gloucestershire</county> <postcode>GL7 1JG</postcode> <telephone>01958303514</telephone> <mobile>07260491667</mobile> <email>[email protected]</email> </personal> <account> <username>BigUndecided</username> <password>ea297847f80e046ca24a8621f4068594</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="3"> <roles>Scaffold Erector</roles> <region>South West</region> </volunteer> </member> </folktask>

    Read the article

  • PHP sorting issue with simpleXML

    - by tugbucket
    test.xml: <?xml version="1.0"?> <props> <prop> <state statename="Mississippi"> <info> <code>a1</code> <location>Jackson</location> </info> <info> <code>d2</code> <location>Gulfport</location> </info> <info> <code>g6</code> <location>Hattiesburg</location> </info> </state> <state statename="Texas"> <info> <code>i9</code> <location>Dallas</location> </info> <info> <code>a7</code> <location>Austin</location> </info> </state> <state statename="Maryland"> <info> <code>s5</code> <location>Mount Laurel</location> </info> <info> <code>f0</code> <location>Baltimore</location> </info> <info> <code>h4</code> <location>Annapolis</location> </info> </state> </prop> </props> test.php // start the sortCities function sortCities($a, $b){ return strcmp($a->location, $b->location); } // start the sortStates function sortStates($t1, $t2) { return strcmp($t1['statename'], $t2['statename']); } $props = simplexml_load_file('test.xml'); foreach ($props->prop as $prop) { $sortedStates = array(); foreach($prop->state as $states) { $sortedStates[] = $states; } usort($sortedStates, "sortStates"); // finish the sortStates /* --- */ echo '<pre>'."\n"; print_r($sortedStates); echo '</pre>'."\n"; /* --- */ foreach ($prop->children() as $stateattr) { // this doesn't do it //foreach($sortedStates as $hotel => @attributes){ // blargh! if(isset($stateattr->info)) { $statearr = $stateattr->attributes(); echo '<optgroup label="'.$statearr['statename'].'">'."\n"; $options = array(); foreach($stateattr->info as $info) { $options[] = $info; } usort($options, "sortCities"); // finish the sortCities foreach($options as $stateattr => $info){ echo '<option value="'.$info->code.'">'.$info->location.'</option>'."\n"; } echo '</optgroup>'."\n"; } else { //empty nodes don't do squat } } } ?> This is the array that: print_r($sortedStates); prints out: Array ( [0] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Maryland ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => s5 [location] => Mount Laurel ) [1] => SimpleXMLElement Object ( [code] => f0 [location] => Baltimore ) [2] => SimpleXMLElement Object ( [code] => h4 [location] => Annapolis ) ) ) [1] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Mississippi ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => a1 [location] => Jackson ) [1] => SimpleXMLElement Object ( [code] => d2 [location] => Gulfport ) [2] => SimpleXMLElement Object ( [code] => g6 [location] => Hattiesburg ) ) ) [2] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Texas ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => i9 [location] => Dallas ) [1] => SimpleXMLElement Object ( [code] => a7 [location] => Austin ) ) ) ) this: // start the sortCities function sortCities($a, $b){ return strcmp($a->location, $b->location); } plus this part of code: $options = array(); foreach($stateattr->info as $info) { $options[] = $info; } usort($options, "sortCities"); // finish the sortCities foreach($options as $stateattr => $info){ echo '<option value="'.$info->code.'">'.$info->location.'</option>'."\n"; } is doing a fine job of sorting by the 'location' node within each optgroup. You can see that in the array I can make it sort by the attribute 'statename'. What I am having trouble with is echoing out and combining the two functions in order to have it auto sort both the states and the cities within and forming the needed optgroups. I tried copying the lines for the cities and changing the names called several ways to no avail.
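
    For what it's worth, the shape of the combined sort is simply "sort the groups, then sort within each group"; a compact sketch of that pattern (in Python, with hypothetical dict keys standing in for the SimpleXML objects):

      states = [
          {"statename": "Texas", "cities": [{"location": "Dallas"}, {"location": "Austin"}]},
          {"statename": "Maryland", "cities": [{"location": "Baltimore"}, {"location": "Annapolis"}]},
      ]
      states.sort(key=lambda s: s["statename"])               # sort the optgroups
      for state in states:
          state["cities"].sort(key=lambda c: c["location"])   # sort options within each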

    Read the article

  • Visual Studio C++ list iterator not decrementable

    - by user69514
    I keep getting an error on visual studio that says list iterator not decrementable: line 256 My program works fine on Linux, but visual studio compiler throws this error. Dammit this is why I hate windows. Why can't the world run on Linux? Anyway, do you see what my problem is? #include <iostream> #include <fstream> #include <sstream> #include <list> using namespace std; int main(){ /** create the list **/ list<int> l; /** create input stream to read file **/ ifstream inputstream("numbers.txt"); /** read the numbers and add them to list **/ if( inputstream.is_open() ){ string line; istringstream instream; while( getline(inputstream, line) ){ instream.clear(); instream.str(line); /** get he five int's **/ int one, two, three, four, five; instream >> one >> two >> three >> four >> five; /** add them to the list **/ l.push_back(one); l.push_back(two); l.push_back(three); l.push_back(four); l.push_back(five); }//end while loop }//end if /** close the stream **/ inputstream.close(); /** display the list **/ cout << "List Read:" << endl; list<int>::iterator i; for( i=l.begin(); i != l.end(); ++i){ cout << *i << " "; } cout << endl << endl; /** now sort the list **/ l.sort(); /** display the list **/ cout << "Sorted List (head to tail):" << endl; for( i=l.begin(); i != l.end(); ++i){ cout << *i << " "; } cout << endl; list<int> lReversed; for(i=l.begin(); i != l.end(); ++i){ lReversed.push_front(*i); } cout << "Sorted List (tail to head):" << endl; for(i=lReversed.begin(); i!=lReversed.end(); ++i){ cout << *i << " "; } cout << endl << endl; /** remove first biggest element and display **/ l.pop_back(); cout << "List after removing first biggest element:" << endl; cout << "Sorted List (head to tail):" << endl; for( i=l.begin(); i != l.end(); ++i){ cout << *i << " "; } cout << endl; cout << "Sorted List (tail to head):" << endl; lReversed.pop_front(); for(i=lReversed.begin(); i!=lReversed.end(); ++i){ cout << *i << " "; } cout << endl << endl; /** remove second biggest element and display **/ l.pop_back(); cout << "List after removing second biggest element:" << endl; cout << "Sorted List (head to tail):" << endl; for( i=l.begin(); i != l.end(); ++i){ cout << *i << " "; } cout << endl; lReversed.pop_front(); cout << "Sorted List (tail to head):" << endl; for(i=lReversed.begin(); i!=lReversed.end(); ++i){ cout << *i << " "; } cout << endl << endl; /** remove third biggest element and display **/ l.pop_back(); cout << "List after removing third biggest element:" << endl; cout << "Sorted List (head to tail):" << endl; for( i=l.begin(); i != l.end(); ++i){ cout << *i << " "; } cout << endl; cout << "Sorted List (tail to head):" << endl; lReversed.pop_front(); for(i=lReversed.begin(); i!=lReversed.end(); ++i){ cout << *i << " "; } cout << endl << endl; /** create frequency table **/ const int biggest = 1000; //create array size of biggest element int arr[biggest]; //set everything to zero for(int j=0; j<biggest+1; j++){ arr[j] = 0; } //now update number of occurences for( i=l.begin(); i != l.end(); i++){ arr[*i]++; } //now print the frequency table. only print where occurences greater than zero cout << "Final list frequency table: " << endl; for(int j=0; j<biggest+1; j++){ if( arr[j] > 0 ){ cout << j << ": " << arr[j] << " occurences" << endl; } } return 0; }//end main

    Read the article

  • Questions regarding the use of A* with the 15-square puzzle

    - by Cheeso
    I'm trying to build an A* solver for a 15-square puzzle. The goal is to re-arrange the tiles so that they appear in their natural positions. You can only slide one tile at a time. Each possible state of the puzzle is a node in the search graph. For the h(x) function, I am using an aggregate sum, across all tiles, of the tile's dislocation from the goal state. In the above image, the 5 is at location 0,0, and it belongs at location 1,0, therefore it contributes 1 to the h(x) function. The next tile is the 11, located at 0,1, and belongs at 2,2, therefore it contributes 3 to h(x). And so on. EDIT: I now understand this is what they call "Manhattan distance", or "taxicab distance". I have been using a step count for g(x). In my implementation, for any node in the state graph, g is just +1 from the prior node's g. To find successive nodes, I just examine where I can possibly move the "hole" in the puzzle. There are 3 neighbors for the puzzle state (aka node) that is displayed: the hole can move north, west, or east. My A* search sometimes converges to a solution in 20s, sometimes 180s, and sometimes doesn't converge at all (waited 10 mins or more). I think h is reasonable. I'm wondering if I've modeled g properly. In other words, is it possible that my A* function is reaching a node in the graph via a path that is not the shortest path? Maybe have I not waited long enough? Maybe 10 minutes is not long enough? For a fully random arrangement, (assuming no parity problems), What is the average number of permutations an A* solution will examine? (please show the math) I'm going to look for logic errors in my code, but in the meantime, Any tips? (ps: it's done in Javascript). Also, no, this isn't CompSci homework. It's just a personal exploration thing. I'm just trying to learn Javascript. EDIT: I've found that the run-time is highly depend upon the heuristic. I saw the 10x factor applied to the heuristic from the article someone mentioned, and it made me wonder - why 10x? Why linear? Because this is done in javascript, I could modify the code to dynamically update an html table with the node currently being considered. This allowd me to peek at the algorithm as it was progressing. With a regular taxicab distance heuristic, I watched as it failed to converge. There were 5's and 12's in the top row, and they kept hanging around. I'd see 1,2,3,4 creep into the top row, but then they'd drop out, and other numbers would move up there. What I was hoping to see was 1,2,3,4 sort of creeping up to the top, and then staying there. I thought to myself - this is not the way I solve this personally. Doing this manually, I solve the top row, then the 2ne row, then the 3rd and 4th rows sort of concurrently. So I tweaked the h(x) function to more heavily weight the higher rows and the "lefter" columns. The result was that the A* converged much more quickly. It now runs in 3 minutes instead of "indefinitely". With the "peek" I talked about, I can see the smaller numbers creep up to the higher rows and stay there. Not only does this seem like the right thing, it runs much faster. I'm in the process of trying a bunch of variations. It seems pretty clear that A* runtime is very sensitive to the heuristic. Currently the best heuristic I've found uses the summation of dislocation * ((4-i) + (4-j)) where i and j are the row and column, and dislocation is the taxicab distance. One interesting part of the result I got: with a particular heuristic I find a path very quickly, but it is obviously not the shortest path. 
I think this is because I am weighting the heuristic. In one case I got a path of 178 steps in 10s. My own manual effort produce a solution in 87 moves. (much more than 10s). More investigation warranted. So the result is I am seeing it converge must faster, and the path is definitely not the shortest. I have to think about this more. Code: var stop = false; function Astar(start, goal, callback) { // start and goal are nodes in the graph, represented by // an array of 16 ints. The goal is: [1,2,3,...14,15,0] // Zero represents the hole. // callback is a method to call when finished. This runs a long time, // therefore we need to use setTimeout() to break it up, to avoid // the browser warning like "Stop running this script?" // g is the actual distance traveled from initial node to current node. // h is the heuristic estimate of distance from current to goal. stop = false; start.g = start.dontgo = 0; // calcHeuristic inserts an .h member into the array calcHeuristicDistance(start); // start the stack with one element var closed = []; // set of nodes already evaluated. var open = [ start ]; // set of nodes to evaluate (start with initial node) var iteration = function() { if (open.length==0) { // no more nodes. Fail. callback(null); return; } var current = open.shift(); // get highest priority node // update the browser with a table representation of the // node being evaluated $("#solution").html(stateToString(current)); // check solution returns true if current == goal if (checkSolution(current,goal)) { // reconstructPath just records the position of the hole // through each node var path= reconstructPath(start,current); callback(path); return; } closed.push(current); // get the set of neighbors. This is 3 or fewer nodes. // (nextStates is optimized to NOT turn directly back on itself) var neighbors = nextStates(current, goal); for (var i=0; i<neighbors.length; i++) { var n = neighbors[i]; // skip this one if we've already visited it if (closed.containsNode(n)) continue; // .g, .h, and .previous get assigned implicitly when // calculating neighbors. n.g is nothing more than // current.g+1 ; // add to the open list if (!open.containsNode(n)) { // slot into the list, in priority order (minimum f first) open.priorityPush(n); n.previous = current; } } if (stop) { callback(null); return; } setTimeout(iteration, 1); }; // kick off the first iteration iteration(); return null; }
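
    The two heuristics under discussion, sketched in Python (the question's solver is JavaScript): plain taxicab distance, and the weighted variant dislocation * ((4-i) + (4-j)). The weighting makes the heuristic inadmissible, which is consistent with the observation above that the found paths are no longer shortest.

      def taxicab(state, size=4):
          """state: tuple of 16 ints, 0 = hole; goal = (1, 2, ..., 15, 0)."""
          h = 0
          for pos, tile in enumerate(state):
              if tile == 0:
                  continue
              goal = tile - 1  # tile t belongs at index t-1 in the goal layout
              h += abs(pos // size - goal // size) + abs(pos % size - goal % size)
          return h

      def weighted(state, size=4):
          h = 0
          for pos, tile in enumerate(state):
              if tile == 0:
                  continue
              goal = tile - 1
              d = abs(pos // size - goal // size) + abs(pos % size - goal % size)
              i, j = pos // size, pos % size       # current row/column of the tile
              h += d * ((size - i) + (size - j))   # settle top rows and left columns first
          return h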

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy, with stock controls supporting basic paging functionality. However, recently I built an MVC application, and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out, the task of creating a semi-generic Pager control for MVC was fairly easy. Now I'm back to working in Web Forms and thought to myself that the way I created the pager in MVC would actually also work in ASP.NET - in fact quite a bit more easily, since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager provides easier reuse in various pages and a more consistent pager display regardless of what kind of 'control' the pager is associated with.

Why a Pager Control?

At first blush it might sound silly to create a new pager control - after all, Web Forms has pretty decent paging support, doesn't it? Well, sort of. Yes, the GridView control has automatic paging built in, and the ListView control has the related DataPager control. The built-in ASP.NET paging has several issues though:

Postback and JavaScript requirements: If you look at paging links in ASP.NET, they are always postback links with javascript:__doPostBack() calls that go back to the server. While that works fine and actually has some benefit - like the fact that paging saves changes to the page and posts them back - it's not very SEO friendly. Basically, if you use JavaScript-based navigation, no search engine will follow the paging links, which effectively cuts off list content after the first page. The DataPager control does support GET-based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer).

DataSource Controls required for Efficient Data Paging Retrieval: The only way to get paging to work efficiently - where only the few records you display on the page are queried for and retrieved from the database - is to use a DataSource control, and only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there's no way to just assign the page number and render the pager based on this custom subset. Other than that, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you're dealing with largish resultsets (although I'm a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn't fit an ObjectDataSource, you're SOL. That's a real shame too, because with LINQ-based querying it's real easy to retrieve a subset of data that is just the data you want to display, but the native pager functionality doesn't support just setting properties to display that subset, AFAIK.

DataPager is not Free Standing: The DataPager control is the closest thing to a decent pager implementation that ASP.NET has, but alas it's not a free-standing component - it works off a related control, and the only one it effectively supports from the stock ASP.NET controls is the ListView control. This means you can't use the same data pager formatting for a grid and a list view or vice versa, and you're always tied to the control.
Paging Events: In order to handle paging you have to deal with paging events. The events fire at specific times in the page pipeline, and because of this you often have to handle data binding in a way that works around the paging events - or else you end up double-binding your data sources based on paging. Yuk.

Styling: The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout, and it renders somewhat cleaner, but it too is not exactly easy to get a decent display out of.

Not a Generic Solution: The problem with the ASP.NET controls, too, is that they're not generic. GridView and DataGrid use their own internal paging, ListView can use a DataPager, and if you want to manually create a data layout - well, you're on your own. IOW, depending on what you use, you likely have very different-looking paging experiences.

So, I figured I've struggled with this once too many times and finally sat down and built a Pager control.

The Pager Control

My goal was to create a totally free-standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all - you should just be able to set properties and display a pager. The Pager control I ended up with has the following features:

Completely free-standing Pager control - no control or data dependencies
Complete manual control - Pager can render without any data dependency
Easy to use: only need to set PageSize, ActivePage and TotalItems
Supports optional filtering of IQueryable for efficient queries and pager rendering
Supports optional full-set filtering of IEnumerable<T> and DataTable
Page links are plain HTTP GET href links
Control automatically picks up page links on the URL and assigns them (automatic page detection, no page-index-changing events to hook up)
Full CSS styling support

On the downside, there's no templating support for the control, so the layout of the pager is relatively fixed. All elements however are stylable, and there are options to control the text and layout, such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):

The markup for these two pagers looks like this:

<ww:Pager runat="server" id="ItemPager"
          PageSize="5"
          PageLinkCssClass="gridpagerbutton"
          SelectedPageCssClass="gridpagerbutton-selected"
          PagesTextCssClass="gridpagertext"
          CssClass="gridpager"
          RenderContainerDiv="true"
          ContainerDivCssClass="gridpagercontainer"
          MaxPagesToDisplay="6"
          PagesText="Item Pages:"
          NextText="next"
          PreviousText="previous" />

<ww:Pager runat="server" id="ItemPager2"
          PageSize="5"
          RenderContainerDiv="true"
          MaxPagesToDisplay="6" />

The latter example uses default style settings, so there's not much to set. The first example, on the other hand, explicitly assigns custom styles and overrides a few of the formatting options.

Styling

The styling is based on a number of CSS classes, of which the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style.
The default styling shown for the red-outlined pager looks like this:

.pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; }
.pager { float: right; font-size: 10pt; text-align: left; }
.pagerbutton, .pagerbutton-selected, .pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; }
.pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; }
.pagerbutton-first { margin-right: 12px; }
.pagerbutton-last, .pagerbutton-prev { margin-left: 12px; }
.pagertext { border: none; margin-left: 30px; font-weight: bold; }
.pagerbutton a { text-decoration: none; }
.pagerbutton:hover { background-color: maroon; color: cornsilk; }
.pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; }
.pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; }

Yup, that's a lot of styling settings, although not all of them are required. The key ones are pagerbutton, pager and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the 'special' buttons. In my apps I tend to have two kinds of pages: those that are associated with typical 'grid' displays that show purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles.

Configuring the Pager with Code

Finally, let's look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own it's as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup:

<ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" />

I can use code as simple as:

ItemPagerManual.PageSize = 3;
ItemPagerManual.ActivePage = 4;
ItemPagerManual.TotalItems = 20;

Note that ActivePage is not required - the control will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a UserControl here because there are different views the user can choose from, with each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control, I can put the pager on the host page instead of into each of the user controls. IOW, there's only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

protected void Page_Load(object sender, EventArgs e)
{
    Category = Request.Params["Category"] ??
               string.Empty;

    IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

    // Update the page and filter the list down
    ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

    // Render user control with a list view
    Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
    ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;
    phItemList.Controls.Add(ulItemList);  // placeholder
}

The code uses a business object to retrieve items by category as an IQueryable, which means that the result is only an expression tree that hasn't executed SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things:

Filters the IQueryable to retrieve only the data displayed on the active page
Sets the TotalItems property and calculates TotalPages on the Pager

and that's it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties, to render the pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but those versions just filter the data by removing rows/items from the entire already-retrieved data.

Output Generated and Paging Links

The output generated creates pager links as plain href links. Here's what the output looks like:

<div id="ItemPager" class="pagercontainer">
    <div class="pager">
        <span class="pagertext">Pages: </span>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton">1</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton">2</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton">3</a>
        <span class="pagerbutton-selected">4</span>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton">5</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton">6</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last">20</a>&nbsp;
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev">Prev</a>&nbsp;
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next">Next</a>
    </div>
    <br clear="all" />
</div>

The links point back to the current page and simply append a Page= value to the query string. When the page gets reloaded with the new page number, the pager detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that's needed to handle paging. Note that HTTP GET based paging is different than the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page - no page preservation at this time. The advantage of not using postback paging is that the URLs generated are plain HTML links that a search engine can follow, where __doPostBack() links are not.

Pager with a Grid

The pager also works in combination with grid controls, so it's easy to bypass the grid control's paging features if desired. In the following example I use a GridView control and bind it to a DataTable result, which is also filterable by the Pager control.
The very basic, plain-vanilla ASP.NET grid markup looks like this:

<div style="width: 600px; margin: 0 auto; padding: 20px;">
    <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems"
                  CssClass="blackborder" style="width: 600px;">
        <AlternatingItemStyle CssClass="gridalternate" />
        <HeaderStyle CssClass="gridheader" />
    </asp:DataGrid>

    <ww:Pager runat="server" ID="Pager"
              CssClass="gridpager"
              ContainerDivCssClass="gridpagercontainer"
              PageLinkCssClass="gridpagerbutton"
              SelectedPageCssClass="gridpagerbutton-selected"
              PageSize="8"
              RenderContainerDiv="true"
              MaxPagesToDisplay="6" />
</div>

and looks like this when rendered, using a custom set of CSS styles. The code-behind for this example is also very simple:

protected void Page_Load(object sender, EventArgs e)
{
    string category = Request.Params["category"] ?? "";

    busItem itemRep = WebStoreFactory.GetItem();
    var items = itemRep.GetItemsByCategory(category)
                       .Select(itm => new { Sku = itm.Sku, Description = itm.Description });

    // run query into a DataTable for demonstration
    DataTable dt = itemRep.Converter.ToDataTable(items, "TItems");

    // Remove all items not on the current page
    dt = Pager.FilterDataTable(dt, 0);

    // bind and display
    gdItems.DataSource = dt;
    gdItems.DataBind();
}

A little contrived, I suppose, since the list could have been bound directly from the items, but this demonstrates that you can also bind against a DataTable if your business layer returns those. Unfortunately there's no way to filter a DataReader, as it's a one-way, forward-only reader and the reader is required by the DataSource to perform the bindings. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query).

Control Creation

The control itself is a pretty brute-force ASP.NET control. Nothing clever about it other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit's repository (note there are a few dependencies).
To give you an idea how the control works, here is the Render() method:

/// <summary>
/// overridden to handle custom pager rendering for runtime and design time
/// </summary>
/// <param name="writer"></param>
protected override void Render(HtmlTextWriter writer)
{
    base.Render(writer);

    if (TotalPages == 0 && TotalItems > 0)
        TotalPages = CalculateTotalPagesFromTotalItems();

    if (DesignMode)
        TotalPages = 10;

    // don't render pager if there's only one page
    if (TotalPages < 2)
        return;

    if (RenderContainerDiv)
    {
        if (!string.IsNullOrEmpty(ContainerDivCssClass))
            writer.AddAttribute("class", ContainerDivCssClass);
        writer.RenderBeginTag("div");
    }

    // main pager wrapper
    writer.WriteBeginTag("div");
    writer.AddAttribute("id", this.ClientID);
    if (!string.IsNullOrEmpty(CssClass))
        writer.WriteAttribute("class", this.CssClass);
    writer.Write(HtmlTextWriter.TagRightChar + "\r\n");

    // Pages Text
    writer.WriteBeginTag("span");
    if (!string.IsNullOrEmpty(PagesTextCssClass))
        writer.WriteAttribute("class", PagesTextCssClass);
    writer.Write(HtmlTextWriter.TagRightChar);
    writer.Write(this.PagesText);
    writer.WriteEndTag("span");

    // if the base url is empty use the current URL
    FixupBaseUrl();

    // set _startPage and _endPage
    ConfigurePagesToRender();

    // write out first page link
    if (ShowFirstAndLastPageLinks && _startPage != 1)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write("1");
        writer.WriteEndTag("a");
        writer.Write("&nbsp;");
    }

    // write out all the page links
    for (int i = _startPage; i < _endPage + 1; i++)
    {
        if (i == ActivePage)
        {
            writer.WriteBeginTag("span");
            if (!string.IsNullOrEmpty(SelectedPageCssClass))
                writer.WriteAttribute("class", SelectedPageCssClass);
            writer.Write(HtmlTextWriter.TagRightChar);
            writer.Write(i.ToString());
            writer.WriteEndTag("span");
        }
        else
        {
            writer.WriteBeginTag("a");
            string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&');
            writer.WriteAttribute("href", pageUrl);
            if (!string.IsNullOrEmpty(PageLinkCssClass))
                writer.WriteAttribute("class", PageLinkCssClass);
            writer.Write(HtmlTextWriter.SelfClosingTagEnd);
            writer.Write(i.ToString());
            writer.WriteEndTag("a");
        }
        writer.Write("\r\n");
    }

    // write out last page link
    if (ShowFirstAndLastPageLinks && _endPage < TotalPages)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(TotalPages.ToString());
        writer.WriteEndTag("a");
    }

    // Previous link
    if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(PreviousText);
        writer.WriteEndTag("a");
    }

    // Next link
    if (ShowPreviousNextLinks && !string.IsNullOrEmpty(NextText) && ActivePage < TotalPages)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(NextText);
        writer.WriteEndTag("a");
    }

    writer.WriteEndTag("div");

    if (RenderContainerDiv)
    {
        if (RenderContainerDivBreak)
            writer.Write("<br clear=\"all\" />\r\n");
        writer.WriteEndTag("div");
    }
}

As I said, pretty much brute-force rendering based on the control's property settings, of which there are quite a few. You can also see the pager in the designer above; unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again, I'm not using the designer anyway :-}.

Filtering Data

What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice - it violates separation of concerns - it's very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional - if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually. There are three filter method types available, for IQueryable, IEnumerable and DataTable, which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple, as it can simply rely on .Skip() and .Take() with LINQ:

/// <summary>
/// Queries the database for the ActivePage applied manually
/// or from the Request["page"] variable. This routine
/// figures out and sets TotalPages, ActivePage and
/// returns a filtered subset IQueryable that contains
/// only the items from the ActivePage.
/// </summary>
/// <param name="query"></param>
/// <param name="activePage">
/// The page you want to display. Sets the ActivePage property when passed.
/// Pass 0 or smaller to use ActivePage setting.
/// </param>
/// <returns></returns>
public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage)
    where T : class, new()
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = query.Count();

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return query;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
        query = query.Skip(skip * PageSize);

    _TotalPages = CalculateTotalPagesFromTotalItems();

    return query.Take(PageSize);
}

The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn't want to add the Linq DataSetExtensions assembly just for this):

/// <summary>
/// Filters a data table for an ActivePage.
///
/// Note: Modifies the data set permanently by removing DataRows
/// </summary>
/// <param name="dt">Full result DataTable</param>
/// <param name="activePage">Page to display. 0 to use ActivePage property</param>
/// <returns></returns>
public DataTable FilterDataTable(DataTable dt, int activePage)
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = dt.Rows.Count;

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return dt;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
    {
        for (int i = 0; i < skip * PageSize; i++)
            dt.Rows.RemoveAt(0);
    }
    while (dt.Rows.Count > PageSize)
        dt.Rows.RemoveAt(PageSize);

    return dt;
}

Using the Pager Control

The pager as it is is a first cut I built a couple of weeks ago and have been tweaking a little since then as part of an internal project I'm working on. I've replaced a bunch of pagers on various older pages with this pager without any issues, and now have what feels like a more consistent user interface where paging looks and feels the same across different controls. As a bonus, I'm only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the style sheet so the styling is consistent - no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of…

Resources

The Pager is part of the West Wind Web & Ajax Toolkit
Pager.cs Source Code (some toolkit dependencies)
Westwind.css base stylesheet with .pager and .gridpager styles
Pager Example Page

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 sessionStorage and localStorage are very useful and super easy to use to manage client-side state. For building rich client-side or SPA-style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server-rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string-based key/value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array, with a length property and array indexers to set and get values. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");

if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser-session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is permanently stored in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the localStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
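As an aside, one caution before digging into the example's code: in some environments (storage disabled, certain private-browsing modes) merely touching sessionStorage can throw, so production code often wraps access in a guard. A minimal sketch of such a guard - the helper names here are mine, not part of this article's code:

// Hypothetical guards around sessionStorage access. The try/catch covers
// environments where touching storage throws, as well as quota errors on write.
function safeSetItem(key, value) {
    try {
        window.sessionStorage.setItem(key, value); // value must be a string
        return true;
    } catch (e) {
        return false; // storage unavailable or quota exceeded
    }
}

function safeGetItem(key) {
    try {
        return window.sessionStorage.getItem(key); // null if the key doesn't exist
    } catch (e) {
        return null;
    }
}

The same two functions work against localStorage by swapping the object.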
The code that deals with sessionStorage (localStorage is not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        var count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough serialize your data, so it's quite possible to pass complex objects and store them in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage.

What we just looked at is a purely client-side implementation where a couple of counters are stored. For rich, client-centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server-centric HTML applications when you combine server rendering with some JavaScript to perform client-side data caching. You can store some state information and data on the client (ie. store a JSON object and carry it forth between server-rendered HTML requests), or you can use it for good old HTTP-based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real-life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server-driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You can click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application-level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server-rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverflow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client-centric SPA-style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server-rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to that page, based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames-based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely that ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames-based desktop site here: http://classifieds.gorge.net/ However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly interface, which ditches the frames and uses some responsive-design tweaking for mobile-capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px). Here's what the mobile, non-frames view looks like: As you can see, this means that the list of classifieds posts is now a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at, and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any additionally loaded items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write-entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter, it doesn't render the part of the page that is cached. Instead, the client-side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL for the list page, when returning to it from the display page (or a host of other pages), looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top-level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that is simply not rendered when the flag is set. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page-level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic that saves and restores values from sessionStorage into separate functions, because they are almost always used in several places:

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};

page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is restored from the cache.
Finally, the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured can be a substantial bit of data. If you recall, I mentioned that the server-side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK. Since I'm using sessionStorage, the storage space has no immediate limits.

Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If present, the cached data structure is read from sessionStorage. It's important here to check whether data was actually returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so always check the cache retrieval result. Always! If there is data, it's loaded and the data.html content is restored by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick, even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates the cache with the selected element. For the click handling I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry, as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
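The listing above calls getUrlEncodedKey() and setUrlEncodedKey(), helpers from the author's script library that aren't shown in the post. As a rough idea of what the read side does, a minimal stand-in could look like this sketch (my code, not the library's, and it assumes the key contains no regex special characters):

// Hypothetical stand-in for getUrlEncodedKey(): returns the value of a
// query string key from a URL, or an empty string if the key is absent.
function getUrlEncodedKey(key, url) {
    var match = new RegExp("[?&]" + key + "=([^&#]*)").exec(url);
    return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : "";
}

With this stand-in, getUrlEncodedKey("back", "https://classifieds.gorge.net/list?back=True") returns "True" (truthy), and a URL without the key returns "" (falsy), which matches how isBack is used in the code above.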
The overall save/restore process is pretty straightforward and doesn't require a lot of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle, and it works, but it took a few tweaks to the rest of the script code and server code to make everything work together. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

Event handling logic
Timing of manipulating DOM elements
Inline script code
Bookmarking the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the point in the DOM rendering cycle you'd normally expect.

JavaScript Event Hookups

The biggest issue I ran into with this approach, almost immediately, is that originally I had various static event handlers hooked up to UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server-rendered HTML, but it breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the restore condition towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to decide whether you cache on the client and also not render that same content on the server.
In the Classifieds app I didn't render server-side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see the problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top-level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server-rendered application we're running, so it's just one approach you can take with caching. However, for server-rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the need for the server to decide whether a request should be rendered or not (ie. no checking for a back=true switch). All the logic related to caching happens on the client in this case.

Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA-style and pull JSON data from the server, then render it using templates or client-side databinding with knockout/angular et al. As with the partial-rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache and rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app then caching may not even be required - the list could just stay in memory, be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any caching solution.

State of the Session

sessionStorage and localStorage are easy to use in client code and can be integrated even with server-centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario: storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit friendlier, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff you can do with this beyond the specific scenario I've covered here…

Resources

Overview of localStorage (also applies to sessionStorage)
Web Storage Compatibility
Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Apache server still running but users cannot connect to website; after "sudo apachectl restart" users can connect again. What's wrong? [on hold]

    - by Tinyfool
    My website is http://ourcoders.com/. Recently I've found that users sometimes report they cannot connect to my website, but when I ssh to the server I find Apache is still running, like this:

root@AY1401261057077842eaZ:~# ps aux|grep apache
root      873  0.0  1.3 290496 13528 ?   Ss   Aug18  0:28 /usr/sbin/apache2 -k start
www-data 3490  0.0  1.8 299004 18764 ?   S    Aug21  0:01 /usr/sbin/apache2 -k start
www-data 3612  0.0  1.5 296008 15540 ?   S    Aug21  0:03 /usr/sbin/apache2 -k start
www-data 3860  0.0  1.5 296636 16268 ?   S    Aug21  0:00 /usr/sbin/apache2 -k start
www-data 3913  0.0  1.2 295468 13084 ?   S    Aug21  0:00 /usr/sbin/apache2 -k start
www-data 3931  0.0  1.7 298488 18228 ?   S    16:02  0:01 /usr/sbin/apache2 -k start
www-data 3938  0.0  1.9 299128 19724 ?   S    16:02  0:02 /usr/sbin/apache2 -k start
www-data 4465  0.0  1.6 296688 16404 ?   S    Aug21  0:00 /usr/sbin/apache2 -k start
www-data 5075  0.0  1.2 295468 13044 ?   S    16:16  0:00 /usr/sbin/apache2 -k start
www-data 5153  0.0  1.5 295880 15612 ?   S    16:17  0:00 /usr/sbin/apache2 -k start
www-data 5770  0.0  1.5 296608 16016 ?   S    16:30  0:00 /usr/sbin/apache2 -k start
www-data 5773  0.0  1.6 296948 16640 ?   S    16:30  0:00 /usr/sbin/apache2 -k start
www-data 5816  0.0  1.6 297216 16976 ?   S    16:31  0:01 /usr/sbin/apache2 -k start
www-data 5918  0.0  1.7 298228 17820 ?   S    16:33  0:01 /usr/sbin/apache2 -k start
www-data 6023  0.0  1.9 299864 19840 ?   S    16:35  0:13 /usr/sbin/apache2 -k start
www-data 6073  0.0  1.7 298480 18120 ?   S    16:36  0:02 /usr/sbin/apache2 -k start
www-data 6088  0.0  2.0 300488 21008 ?   S    16:36  0:12 /usr/sbin/apache2 -k start
www-data 6114  0.0  1.7 298548 18268 ?   S    16:37  0:12 /usr/sbin/apache2 -k start
www-data 6134  0.0  1.6 296688 16532 ?   S    16:37  0:04 /usr/sbin/apache2 -k start
www-data 6193  0.0  1.7 297908 17420 ?   S    16:38  0:08 /usr/sbin/apache2 -k start
www-data 6821  0.0  1.8 299556 19072 ?   S    16:43  0:11 /usr/sbin/apache2 -k start
www-data 7058  0.0  1.7 298676 18204 ?   S    16:48  0:10 /usr/sbin/apache2 -k start
www-data 7065  0.0  1.8 299028 18868 ?   S    16:48  0:11 /usr/sbin/apache2 -k start
www-data 7084  0.0  1.8 299508 19020 ?   S    16:48  0:11 /usr/sbin/apache2 -k start
www-data 7221  0.0  1.8 299160 18768 ?   S    16:51  0:09 /usr/sbin/apache2 -k start
www-data 11453 0.0  1.7 298484 18256 ?   S    09:39  0:02 /usr/sbin/apache2 -k start
root     26324 0.0  0.0   8084   920 pts/0 S+ 22:52  0:00 grep --color=auto apache
root     28517 0.0  0.0   4404   612 ?   S    Aug21  0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28518 0.0  0.0   4404   616 ?   S    Aug21  0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28519 0.0  0.0   4404   612 ?   S    Aug21  0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28520 0.0  0.0   4404   616 ?   S    Aug21  0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28521 0.0  0.0   4312   552 ?   S    Aug21  0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28522 0.0  0.0   4308   548 ?   S    Aug21  0:07 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28523 0.0  0.0   4176   352 ?   S    Aug21  0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
root     28524 0.0  0.0   4180   356 ?   S    Aug21  0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log

Today's only error log entry is below:

[Sat Aug 23 22:52:47 2014] [notice] SIGHUP received.
Attempting to restart
[Sat Aug 23 22:52:47 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.13 with Suhosin-Patch configured -- resuming normal operations

Traffic information (requests per hour):

cat access-2014-08-23.log | cut -d " " -f4 | cut -d":" -f2 | sort | uniq -c | sort -nr
   5692 14
   5291 15
   5083 16
   4723 23
   4463 12
   4057 17
   4011 11
   3926 13
   3852 10
   3187 05
   3176 09
   3055 06
   2790 07
   2672 00
   2608 02
   2591 01
   2577 04
   2514 03
   2497 08
    707 22
     88 18

After I run "sudo apachectl restart", users can connect to my website again. So I want to know: what is the problem? And if "sudo apachectl restart" is needed, can I run this command automatically? Today the same kind of problem appeared again, and I ran netstat -a -n:

Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 115.28.146.116:80 125.39.208.120:50708 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:50278 SYN_RECV tcp 0 0 115.28.146.116:80 220.173.142.152:23320 SYN_RECV tcp 0 0 115.28.146.116:80 60.173.247.132:52851 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:39397 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:56894 SYN_RECV tcp 0 0 115.28.146.116:80 183.129.174.2:21291 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:44499 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:34017 SYN_RECV tcp 0 0 115.28.146.116:80 124.65.50.210:3774 SYN_RECV tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:15770 0.0.0.0:* LISTEN tcp 1 0 115.28.146.116:80 14.127.65.219:61633 CLOSE_WAIT tcp 305 0 115.28.146.116:80 125.39.208.120:37593 ESTABLISHED tcp 0 0 10.144.142.201:52866 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52873 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52868 10.146.6.61:3306 TIME_WAIT tcp 343 0 115.28.146.116:80 182.118.20.215:50709 ESTABLISHED tcp 0 0 115.28.146.116:54784 173.194.127.243:80 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:41253 CLOSE_WAIT tcp 0 0 10.144.142.201:52876 10.146.6.61:3306 ESTABLISHED tcp 559 0 115.28.146.116:80 218.241.144.114:54501 ESTABLISHED tcp 376 0 115.28.146.116:80 116.213.196.119:50604 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59339 CLOSE_WAIT tcp 214 0 115.28.146.116:80 142.4.215.40:34443 ESTABLISHED tcp 0 0 115.28.146.116:48635 115.28.146.116:80 ESTABLISHED tcp 187 0 115.28.146.116:80 115.28.146.116:48635 ESTABLISHED tcp 0 0 10.144.142.201:52853 10.146.6.61:3306 TIME_WAIT tcp 594 0 115.28.146.116:80 183.129.174.2:7090 CLOSE_WAIT tcp 0 0 10.144.142.201:52874 10.146.6.61:3306 TIME_WAIT tcp 0 0 115.28.146.116:80 182.118.20.166:44081 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59028 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61665 CLOSE_WAIT tcp 0 0 10.144.142.201:52860 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:46983 10.146.6.61:3306 ESTABLISHED tcp 0 2290 115.28.146.116:80 14.154.179.243:41049 FIN_WAIT1 tcp 0 0 10.144.142.201:42900 10.146.6.61:3306 ESTABLISHED tcp 571 0 115.28.146.116:80 220.173.142.152:23295 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59337 CLOSE_WAIT tcp 438 0 115.28.146.116:80 42.120.74.202:31567 CLOSE_WAIT tcp 0 0 115.28.146.116:80 113.36.238.28:59498 ESTABLISHED tcp 259 0 115.28.146.116:80 66.249.65.56:36739 ESTABLISHED tcp 0 0 115.28.146.116:80 113.36.238.28:59341 ESTABLISHED tcp 0 0 115.28.146.116:80 142.4.215.40:34267 FIN_WAIT2 tcp 799 0 115.28.146.116:80 180.173.88.1:52779 ESTABLISHED tcp 0 0 115.28.146.116:80 117.136.25.132:25207 FIN_WAIT2 tcp 0 0 115.28.146.116:80 220.181.108.186:42540 TIME_WAIT tcp 0 0 10.144.142.201:59902 10.242.174.13:80 TIME_WAIT tcp 0 1820
    tcp 0 0 115.28.146.116:80 66.249.65.64:56977 TIME_WAIT
    tcp 669 0 115.28.146.116:80 83.251.90.61:49664 ESTABLISHED
    tcp 0 0 10.144.142.201:52872 10.146.6.61:3306 TIME_WAIT
    tcp 233 0 115.28.146.116:80 54.202.88.0:43398 CLOSE_WAIT
    tcp 479 0 115.28.146.116:80 65.49.44.149:25739 ESTABLISHED
    tcp 378 0 115.28.146.116:80 148.251.124.173:39313 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 14.127.65.219:61697 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 49.4.158.2:52986 CLOSE_WAIT
    tcp 769 0 115.28.146.116:80 14.127.65.219:61537 ESTABLISHED
    tcp 0 0 10.144.142.201:52859 10.146.6.61:3306 TIME_WAIT
    tcp 0 0 10.144.142.201:55734 10.164.2.163:9200 TIME_WAIT
    tcp 563 0 115.28.146.116:80 202.55.20.10:22577 CLOSE_WAIT
    tcp 194 0 115.28.146.116:80 37.58.100.165:50908 CLOSE_WAIT
    tcp 791 0 115.28.146.116:80 116.192.2.185:45628 ESTABLISHED
    tcp 709 0 115.28.146.116:80 113.116.61.178:65209 ESTABLISHED
    tcp 706 0 115.28.146.116:80 183.227.44.237:54519 ESTABLISHED
    tcp 301 0 115.28.146.116:80 118.198.243.127:31180 ESTABLISHED
    tcp 0 0 10.144.142.201:55721 10.164.2.163:9200 TIME_WAIT
    tcp 0 0 10.144.142.201:55726 10.164.2.163:9200 TIME_WAIT
    tcp 0 0 10.144.142.201:55723 10.164.2.163:9200 TIME_WAIT
    tcp 681 0 115.28.146.116:80 83.251.90.61:49662 ESTABLISHED
    tcp 0 0 115.28.146.116:80 83.251.90.61:65274 TIME_WAIT
    tcp 1 0 115.28.146.116:80 113.36.238.28:59022 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 180.173.88.1:52781 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 113.36.238.28:59037 CLOSE_WAIT
    tcp 0 0 10.144.142.201:55728 10.164.2.163:9200 TIME_WAIT
    tcp 231 0 115.28.146.116:37596 110.75.102.62:80 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 14.127.65.219:61569 CLOSE_WAIT
    tcp 0 0 10.144.142.201:51310 10.146.6.61:3306 ESTABLISHED
    tcp 299 0 115.28.146.116:80 123.125.71.16:36281 ESTABLISHED
    tcp 0 0 115.28.146.116:48620 115.28.146.116:80 ESTABLISHED
    tcp 1 0 115.28.146.116:80 183.227.44.237:54520 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 113.36.238.28:59026 CLOSE_WAIT
    tcp 479 0 115.28.146.116:80 65.49.44.149:5490 ESTABLISHED
    tcp 665 0 115.28.146.116:80 83.251.90.61:49663 ESTABLISHED
    tcp 0 0 115.28.146.116:53744 173.194.127.147:80 ESTABLISHED
    tcp 1 0 115.28.146.116:80 113.36.238.28:59023 CLOSE_WAIT
    tcp 0 0 115.28.146.116:22 116.192.2.185:34205 ESTABLISHED
    tcp 333 0 115.28.146.116:80 149.174.113.111:54338 CLOSE_WAIT
    tcp 0 0 10.144.142.201:52861 10.146.6.61:3306 TIME_WAIT
    tcp 0 0 10.144.142.201:52863 10.146.6.61:3306 TIME_WAIT
    tcp 1 0 115.28.146.116:80 116.192.2.185:43272 CLOSE_WAIT
    tcp 767 0 115.28.146.116:80 49.4.158.2:52947 CLOSE_WAIT
    tcp 668 0 115.28.146.116:80 83.251.90.61:49665 ESTABLISHED
    tcp 642 0 115.28.146.116:80 222.78.185.50:55788 ESTABLISHED
    tcp 710 0 115.28.146.116:80 113.116.61.178:65264 ESTABLISHED
    tcp 284 0 115.28.146.116:80 157.55.39.243:65185 ESTABLISHED
    tcp 450 0 115.28.146.116:80 65.49.44.149:55496 ESTABLISHED
    tcp 1 0 115.28.146.116:80 116.192.2.185:36629 CLOSE_WAIT
    tcp 233 0 115.28.146.116:80 54.202.88.0:42424 CLOSE_WAIT
    tcp 187 0 115.28.146.116:80 115.28.146.116:48620 ESTABLISHED
    tcp 1 0 115.28.146.116:80 14.127.65.219:61601 CLOSE_WAIT
    tcp 776 0 115.28.146.116:80 202.118.253.102:64883 CLOSE_WAIT
    tcp 841 0 115.28.146.116:80 37.228.105.28:49472 ESTABLISHED
    tcp 787 0 115.28.146.116:80 112.65.226.198:52192 ESTABLISHED
    tcp 0 0 10.144.142.201:55717 10.164.2.163:9200 TIME_WAIT
    tcp 233 0 115.28.146.116:80 54.202.88.0:42855 CLOSE_WAIT
    tcp 379 0 115.28.146.116:80 101.226.166.219:2322 ESTABLISHED
    tcp 0 0 115.28.146.116:80 183.60.212.152:43063 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 180.173.88.1:52780 CLOSE_WAIT
    tcp 784 0 115.28.146.116:80 101.95.29.26:63094 ESTABLISHED
    tcp 463 0 115.28.146.116:80 65.49.44.149:53876 ESTABLISHED
    tcp 1 0 115.28.146.116:80 116.192.2.185:37946 CLOSE_WAIT
    tcp 479 0 115.28.146.116:80 65.49.44.149:41157 ESTABLISHED
    tcp 1 0 115.28.146.116:80 113.36.238.28:59036 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 49.4.158.2:52984 CLOSE_WAIT
    tcp 1 0 115.28.146.116:80 116.192.2.185:38100 CLOSE_WAIT
    tcp 0 0 10.144.142.201:52865 10.146.6.61:3306 TIME_WAIT
    tcp 1 0 115.28.146.116:80 113.36.238.28:59027 CLOSE_WAIT
    tcp 0 0 115.28.146.116:36508 173.194.127.81:80 ESTABLISHED
    tcp 210 0 115.28.146.116:80 188.143.232.123:47775 ESTABLISHED
    tcp 1 0 115.28.146.116:80 113.36.238.28:59025 CLOSE_WAIT
    tcp 0 0 10.144.142.201:52857 10.146.6.61:3306 TIME_WAIT
    tcp 654 0 115.28.146.116:80 49.4.158.2:52985 ESTABLISHED
    tcp 0 0 115.28.146.116:58627 110.75.102.62:80 ESTABLISHED
    tcp 782 0 115.28.146.116:80 180.153.219.13:40293 ESTABLISHED
    tcp 792 0 115.28.146.116:80 116.192.2.185:48187 CLOSE_WAIT
    tcp6 0 0 :::22 :::* LISTEN
    udp 0 0 115.28.146.116:123 0.0.0.0:*
    udp 0 0 10.144.142.201:123 0.0.0.0:*
    udp 0 0 127.0.0.1:123 0.0.0.0:*
    udp 0 0 0.0.0.0:123 0.0.0.0:*
    udp6 0 0 :::123 :::*
    Active UNIX domain sockets (servers and established)
    Proto RefCnt Flags Type State I-Node Path
    unix 2 [ ACC ] STREAM LISTENING 8447 /var/run/mysqld/mysqld.sock
    unix 2 [ ACC ] SEQPACKET LISTENING 6678 /run/udev/control
    unix 2 [ ACC ] STREAM LISTENING 6482 @/com/ubuntu/upstart
    unix 2 [ ACC ] STREAM LISTENING 7543 /var/run/dbus/system_bus_socket
    unix 7 [ ] DGRAM 7551 /dev/log
    unix 2 [ ACC ] STREAM LISTENING 7650 /var/run/nscd/socket
    unix 2 [ ] DGRAM 7156424
    unix 3 [ ] STREAM CONNECTED 7156137 /var/run/dbus/system_bus_socket
    unix 3 [ ] STREAM CONNECTED 7156136
    unix 2 [ ] DGRAM 7156135
    unix 2 [ ] DGRAM 7155834
    unix 2 [ ] DGRAM 9734
    unix 3 [ ] STREAM CONNECTED 9151 /var/run/dbus/system_bus_socket
    unix 3 [ ] STREAM CONNECTED 9150
    unix 3 [ ] STREAM CONNECTED 9136 /var/run/dbus/system_bus_socket
    unix 3 [ ] STREAM CONNECTED 9135
    unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket
    unix 3 [ ] STREAM CONNECTED 9105
    unix 2 [ ] DGRAM 9073
    unix 3 [ ] STREAM CONNECTED 7575 /var/run/dbus/system_bus_socket
    unix 3 [ ] STREAM CONNECTED 7574
    unix 3 [ ] STREAM CONNECTED 7565
    unix 3 [ ] STREAM CONNECTED 7564
    unix 3 [ ] STREAM CONNECTED 7332 @/com/ubuntu/upstart
    unix 3 [ ] STREAM CONNECTED 7330
    unix 3 [ ] DGRAM 6712
    unix 3 [ ] DGRAM 6711
    unix 3 [ ] STREAM CONNECTED 6662 @/com/ubuntu/upstart
    unix 3 [ ] STREAM CONNECTED 6635
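    Since the dump above is hard to eyeball, I also count the connection states with a one-liner like this (just a sketch; I am assuming the state is the sixth column, which may vary between netstat versions):

    # count TCP connections per state (SYN_RECV, ESTABLISHED, CLOSE_WAIT, ...)
    netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -nr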
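    As for automating the restart mentioned above: until I understand the root cause, I am thinking of a small cron-driven watchdog that restarts Apache only when the site stops answering. This is only a sketch under my own assumptions -- the probe URL, timeout, and log path are invented for illustration:

    #!/bin/sh
    # Hypothetical watchdog: restart Apache when the site stops responding.
    # Intended to run from root's crontab, e.g.:
    #   */5 * * * * /usr/local/sbin/apache-watchdog.sh
    URL="http://ourcoders.com/"          # cheap page to probe (assumption)
    LOG="/var/log/apache-watchdog.log"   # hypothetical log path

    # -s: silent, -o /dev/null: discard the body, --max-time 20: give up after 20s
    if ! curl -s -o /dev/null --max-time 20 "$URL"; then
        echo "$(date '+%F %T') site unreachable, restarting apache2" >> "$LOG"
        /usr/sbin/apachectl restart >> "$LOG" 2>&1
    fi

    This would at least bring the site back without me logging in manually, though it does not explain the underlying problem.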

    Read the article
