Search Results

Search found 5189 results on 208 pages for 'foo wei tau'.


  • How best to modernize a 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000s. Currently they add a feature here and there, but they have a large codebase, and over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school counselor shame into the mix.]

    Java, Vintage 2002: The application uses EJB 2.1, Struts 1.x, and DAOs with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years, they have spent effort modernizing the UI using Ajax techniques, largely via JavaScript libraries (jQuery, YUI, etc.).

    Client side: The lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating. Other web frameworks became popular (Wicket, Spring, JSF), and Struts 2 was not the clear winner. Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to offer much marginal benefit at a very high cost, and they did not want a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features in Struts 1.

    Server side: They looked into moving to EJB 3, but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome, and EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the EJB server runs in a separate JVM from the servlet engine. No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box from the app server. The ear file contains multiple copies of the same jar file: one for the web layer (foo.war/WEB-INF/lib) and one for the server side (foo.ear/). The app server only loads one jar, so the duplication makes for ambiguity.

    Caching: They use several cache implementations: OpenSymphony OSCache and a homegrown cache. JGroups provides clustering support.

    Now what? The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new subsystem features and reducing maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred. What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that 64-bit JVMs offer? Thanks in advance.

    Read the article

  • puppet: propagate variable from node to erb template?

    - by picca
    Is it possible to declare a variable in a node and then propagate it all the way down to the ERB template? Example:

        node basenode {
            $myvar = "bar" # default
            include myclass
        }

        node mynode extends basenode {
            $myvar = "foo"
        }

        class myclass {
            file { "/root/myfile":
                content => template("myclass/mytemplate.erb"),
                ensure  => present,
            }
        }

    Source of mytemplate.erb:

        myvar has value: <%= myvar %>

    I know that my example might be complicated, but I'm trying to propagate a file to (almost) all my nodes, and I want its content to be altered depending on the node which requests the file. The $myvar = "bar" statement should be the default when a node does not override its value. Is there a solution to my problem? I'm using Puppet 0.24.5.

    Read the article

  • Element.appendChild() hosed in IE .. workaround? (related to innerText vs textContent)

    - by Rowe Morehouse
    I've heard that using el.innerText || el.textContent can yield unreliable cross-browser results, so I'm walking the DOM tree to collect text nodes recursively and writing them into tags in the HTML body. What this script does is read hash substring values from window.location and write them into the HTML. The script is working for me in Chrome and Firefox, but choking in IE. I call the page with a URL syntax like this:

        http://example.com/pagename.html#dyntext=FOO&dynterm=BAR&dynimage=FRED

    UPDATE: Solution. I moved the scripts to just before </body> (where they should have been), removed console.log(sPageURL);, and now it's working in Chrome, Firefox, IE8 and IE9. This is my workaround for the innerText vs textContent cross-browser issue when you are just placing text rather than getting text; in this case, getting hash substring values from window.location and writing them into the page:

        <html>
        <body>
        <span id="dyntext-span" style="font-weight: bold;"></span><br />
        <span id="dynterm-span" style="font-style: italic;"></span><br />
        <span id="dynimage-span" style="text-decoration: underline;"></span><br />
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8/jquery.min.js"></script>
        <script>
        $(document).ready(function() {
            var tags = ["dyntext", "dynterm", "dynimage"];
            for (var i = 0; i < tags.length; ++i) {
                var param = GetURLParameter(tags[i]);
                if (param) {
                    var dyntext = GetURLParameter('dyntext');
                    var dynterm = GetURLParameter('dynterm');
                    var dynimage = GetURLParameter('dynimage');
                }
            }
            var elem = document.getElementById("dyntext-span");
            var text = document.createTextNode(dyntext);
            elem.appendChild(text);
            var elem = document.getElementById("dynterm-span");
            var text = document.createTextNode(dynterm);
            elem.appendChild(text);
            var elem = document.getElementById("dynimage-span");
            var text = document.createTextNode(dynimage);
            elem.appendChild(text);
        });

        function GetURLParameter(sParam) {
            var sPageURL = window.location.hash.substring(1);
            var sURLVariables = sPageURL.split('&');
            for (var i = 0; i < sURLVariables.length; i++) {
                var sParameterName = sURLVariables[i].split('=');
                if (sParameterName[0] == sParam) {
                    return sParameterName[1];
                }
            }
        }
        </script>
        </body>
        </html>

    FINAL UPDATE: If your hash substring values require spaces (like a linguistic phrase with three words, for example), then separate the words with the + character in your URI and replace the \u002B (+) character with a space when you create each text node, like this:

        var elem = document.getElementById("dyntext-span");
        var text = document.createTextNode(dyntext.replace(/\u002B/g, " "));
        elem.appendChild(text);
        var elem = document.getElementById("dynterm-span");
        var text = document.createTextNode(dynterm.replace(/\u002B/g, " "));
        elem.appendChild(text);
        var elem = document.getElementById("dynimage-span");
        var text = document.createTextNode(dynimage.replace(/\u002B/g, " "));
        elem.appendChild(text);

    Now form your URI like this:

        http://example.com/pagename.html#dyntext=FOO+MAN+CHU&dynterm=BAR+HOPPING&dynimage=FRED+IS+DEAD

    Read the article

  • search nfs network volume from mac client

    - by user1440190
    Asked: how does a Mac OS X (Snow Leopard or Lion) user search the cluster for a particular file (foo.txt)? The answer I got:

    "From the cluster, you would need to run some form of recursive lookup for the file desired, for example using 'find':

        RAM-1# find /ifs | grep test.txt
        /ifs/Elements/avid2test.txt
        /ifs/Elements/test.txt

    I would suggest contacting Apple support regarding their recommendation for searching for files on remote file systems from the Mac client itself."

    OK, that's great, but I don't want users using the CLI! Does anyone know a good method (non-CLI)? Spotlight is not an option. BTW, the cluster is roughly 80TB.

    Read the article

  • How can I rewrite a URL and pass on the original URL as a parameter?

    - by Bobby Jack
    I'm building a site that needs to include a 'check' procedure to do several initiation tasks for a user's session. Examples include checking whether they're accepting cookies, determining if their IP address grants them specific privileges, etc. Once the check is complete, I need to redirect the user back to the page they originally requested. The plan is to use RewriteCond to map all URLs to an 'initiator' if the user doesn't have a specific cookie set. Let's say I want to rewrite all URLs (ultimately, with some conditions, of course) to:

        /foo?original_url=...

    where the ... is the original URL requested, URL-encoded. The closest I've got is this:

        RewriteRule ^(.*)$ http://localhost/php/cookie.php$1 [R=301]

    I can then inspect the original URL, captured in the backreference, via PATH_INFO. However, this is pretty messy; I would much prefer to pass that value as a URL parameter.
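
    One way to get the original path into a query parameter instead of PATH_INFO, sketched with placeholders (the cookie name checked=1 and the script path are assumptions; the B flag, which URL-escapes the backreference, needs Apache 2.2 or later):

        RewriteCond %{REQUEST_URI} !^/php/cookie\.php
        RewriteCond %{HTTP_COOKIE} !checked=1
        RewriteRule ^(.*)$ /php/cookie.php?original_url=$1 [R=302,B,L]

    The first condition keeps the initiator itself from being rewritten in a loop, and R=302 avoids browsers caching the redirect the way they would a 301.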

    Read the article

  • Elevate the weight of browsing history in Google Chrome's autocomplete

    - by maayank
    Google Chrome auto-completes web addresses as you type them in the address bar. Alas, it gives absurdly more weight to Google's own auto-suggest than to my own browsing history, which seems a bit foolish: if I regularly (i.e. twice a week) check a certain website with the keywords "foo bar ponies" in its URL, it is reasonable to expect that I will want to visit that site again and not other sites. While this is a bit subjective, at the very least I would expect such URLs to be in the list Chrome suggests, even if not at the top. Is there some plugin/secret option that alters the default behavior?

    Read the article

  • Netcat UDP File Transfer Between Two Servers Times Out?

    - by Mark Bowytz
    I'm testing file transfer speeds between two Red Hat servers that are connected to the same switch within the data center, and I decided to use netcat to eliminate protocol overhead as much as possible. Testing in TCP mode went well, and I was wondering how UDP might fare. On my receiving (client) end, I ran this:

        nc -u -l 11225 -v > myfile.out

    And then on the sending (server) end I ran the following:

        cat myfile.out | nc -u myserver.foo.zzz.com 11225 -v

    The file I'm testing with is 38 GB, but the transfer seems to stop at around 15 GB (one time at 14.9, another at 15.6). I've tested by adding a "-w 5000" just in case it's timing out, but no joy. Adding the -v doesn't show anything except acknowledging that the connection occurred. No errors. So, any suggestions as to why the transfer would cease?
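
    UDP has no flow control, so a fast sender can silently overrun the receiver's socket buffers and the kernel simply drops the datagrams; nothing in nc will report that. A hedged thing to try (assuming pv is installed; the 10 MB/s rate is an arbitrary starting point):

        # throttle the sender so the receiver's UDP buffers can keep up
        pv -L 10m myfile.out | nc -u myserver.foo.zzz.com 11225

    Watching netstat -su on the receiver for a climbing "packet receive errors" count would confirm the buffer-overrun theory.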

    Read the article

  • Access port on machine by connecting to other machine via SSH?

    - by piquadrat
    I have to access my home router's web interface on port 80. Unfortunately, the only way into the network I have at the moment is SSH to another machine on the same network:

        me ---|--- SSH Box ---- Home Router

    My Google foo seems to have abandoned me; I couldn't find anything helpful. Any ideas? Thanks! To clarify: I'm not at home right now. I do however have access to one machine on the network (a QNAP NAS) over SSH. I need to access the home router's web interface on port 80 from my notebook, which is outside of the home network.
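
    This is the textbook case for SSH local port forwarding. A minimal sketch (the router address 192.168.1.1 and the NAS hostname are assumptions; substitute your own):

        # forward local port 8080 through the NAS to the router's port 80
        ssh -L 8080:192.168.1.1:80 admin@qnap.example.com

    While that session is open, browsing to http://localhost:8080 on the notebook reaches the router's web interface.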

    Read the article

  • Highlighting subroutines in Notepad++

    - by predatflaps
    I would like to highlight the contents of if statements in a slightly different colour from the background, so that I can see them more easily. Is it possible in Notepad++? It would be amazing to highlight all the nested blocks in a function in slightly different light/dark colours depending on the scheme, so that you can see the commands at a glance without spying out the curly brackets. Not psychedelic colours, just a slightly visible background colour difference. Wouldn't it be great?

        Function Foo() {      // highlight one color
            if () {           // highlight color2
                for () {      // highlight color3
                    if () {   // highlight color4
                    }
                }
            }
        }

    Read the article

  • Setting Up Customer-Specific Domains

    - by GregT
    I can go to Fog Creek's web site, set up a new account, and they will instantly assign me a URL such as 'mycompany.fogbugz.com' (where 'mycompany' is something I make up, as opposed to some value assigned by Fog Creek). I can do the same type of thing with Beanstalk and many other vendors. I have been Googling around trying to figure out exactly how this works.

    1: In the above example, is 'mycompany.fogbugz.com' set up in DNS in some special way, other than how one would set up a vanilla 'www.foo.com' domain?

    2: Assuming Fog Creek uses Tomcat (which I am sure is NOT true, but pretend it is), would they be likely to have created a tomcat/webapps/mycompany subdirectory on their server? Or is there some simpler way to handle this?

    I'm obviously not a DNS or TC wizard. Any insight appreciated. Happy New Year!
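
    The usual pattern is a single wildcard DNS record plus one application that inspects the Host header, so nothing per-customer is created in DNS or on disk. A hedged sketch of what such a zone entry might look like (not Fog Creek's actual configuration; 203.0.113.10 is a documentation address):

        ; every subdomain resolves to the same front end
        *.fogbugz.com.   IN   A   203.0.113.10

    The application then reads 'mycompany' out of the Host header on each request and looks the account up in its database, so no tomcat/webapps/mycompany directory per customer would be needed.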

    Read the article

  • How to run a process and completely detach it from its parent shell

    - by Bicou
    I'm running a program on a Linux server that will take days to complete. I'm launching it from my workstation in an SSH terminal, as this program is command-line only. I want to be able to do all of these: launch the program, redirect its standard outputs to files, and exit my SSH session without terminating the process. I thought about:

        $ ./MyProg.csh -params -foo -bar </dev/null 1>~/out.log 2>~/err.log &

    However, the process is terminated the moment I close my SSH session. My workstation is running Windows XP, and I cannot guarantee its uptime over several days, which is required for the processing of my data on the Linux server. As you may have noted, my program has to be launched from csh. Is it possible to do this? Thanks.
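
    The background job dies because the shell sends it SIGHUP when the session closes. Two common ways around that, sketched here (untested against this exact program, but both wrap the command and so don't care that it's a csh script):

        # option 1: immunize the process against hangup
        nohup ./MyProg.csh -params -foo -bar </dev/null 1>~/out.log 2>~/err.log &

        # option 2: run it inside a detachable terminal session
        screen -dmS myjob ./MyProg.csh -params -foo -bar

    With screen, running screen -r myjob later reattaches to check on it; tmux works the same way.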

    Read the article

  • How can I get rsync to ignore missing files?

    - by Joe Casadonte
    I'm executing a command like the following against several different systems:

        $ rsync -a -v [email protected]:'/path/to/first/*.log path/to/second.txt' /dest/folder/0007/.

    Sometimes *.log does not exist, and that's OK, but rsync generates the following error:

        receiving file list ...
        rsync: link_stat "/path/to/first/*.log" failed: No such file or directory (2)
        done

    Is there any way to suppress that? The only way I can think of is to use include and exclude filters, which just seem a PITA to me. Thanks!
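
    Two hedged options, depending on the rsync version available (user@remotehost below is a placeholder):

        # rsync 3.1+ has a flag for exactly this case
        rsync -a -v --ignore-missing-args user@remotehost:'/path/to/first/*.log' /dest/folder/0007/.

        # on older versions, let it run and swallow only the "partial transfer" /
        # "files vanished" exit codes (23 and 24)
        rsync -a -v user@remotehost:'/path/to/first/*.log' /dest/folder/0007/. || {
            rc=$?
            [ "$rc" -ne 23 ] && [ "$rc" -ne 24 ] && exit "$rc"
        }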

    Read the article

  • group write permission ignored in ubuntu

    - by NorthPole
    It's probably my stupidity here, but I'm stuck on this and would appreciate the help. I want my user to have full access to the local Apache root folder, and I also want Apache to have full access to the same folder. What I did was create a new group called DevGroup and add my username and www-data to it. I also changed the permissions to 770 to allow full group access, but now it won't allow me or Apache any kind of access to the folder. Here is what I get with ls:

        drwxrwx--- 12 root DevGroup 4096 Sep 27 17:34 testFolder

    which seems perfect, but when I try as a user to access the folder I get this:

        var/www$ ls testFolder/
        ls: cannot open directory testFolder/: Permission denied

    Also, when I try to access a page in the folder from the browser:

        [Thu Sep 27 17:47:16 2012] [error] [client 127.0.0.1] PHP Fatal error: Unknown: Failed opening required '/var/www/testFolder/foo.php' (include_path='.:/usr/share/php:/usr/share/pear') in Unknown on line 0
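
    One common cause, offered as a guess: group membership is read at login, so a shell (and a running Apache) started before the user was added to DevGroup won't have the new group yet. A quick check and fix:

        id                             # is DevGroup listed in the output?
        newgrp DevGroup                # subshell with the group active, or log out and back in
        sudo service apache2 restart   # so www-data picks up its new membership too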

    Read the article

  • split a textfile after every n matches into new files using sed or awk

    - by ozz
    I tried to split a file into parts of n matches each. The file is just one line, and the separator is '<br>':

        foo<br>bar<br>.....<br>

    I just want to split the file into parts where each file has 100 datasets (text plus <br>); normally 100 datasets, but at the end maybe less. I already played around with split-file-in-2-with-sed and split-one-file-into-multiple-files-based-on-pattern:

        sed.exe -e "^.*.<br>{0,100}/g" < original.txt > first_half.txt

    The split does not work, and the result is only one file instead of many.
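
    sed is a poor fit here because the whole input is a single line; awk with a multi-character record separator handles it directly. A sketch assuming GNU awk (POSIX awk only allows a single-character RS):

        # treat <br> as the record separator and write 100 records per part file
        gawk 'BEGIN { RS = "<br>"; n = 100 }
              NF { fname = sprintf("part_%03d.txt", int((NR-1)/n))
                   printf "%s<br>", $0 > fname }' original.txt

    The NF guard skips the empty record that a trailing <br> produces, and the last file simply ends up with fewer than 100 datasets.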

    Read the article

  • Chef command to create new ec2 instance with second ebs volume attached and mounted instead of the default ephemeral volume?

    - by runamok
    We currently use this command to create a new EC2 instance with Chef:

        knife ec2 server create --node-name=prod-apache-1 --availability-zone us-east-1c --image ami-3d4ff254 --distro ubuntu12.04-gems --groups "default" --ssh-key foo --identity-file ~/.ssh/id_rsa --ssh-user ubuntu --flavor m1.small

    After this command, we then run further Chef commands to finish provisioning the server. I was wondering if it would be possible, while first setting up the instance, to have a 100 GB volume created and mounted at /mnt, and to have the ephemeral storage mounted at /tmp or /mnt-ephemeral instead. If not, what further commands in Chef would you advise running? I know how to do this via the AWS console and can probably figure out how to do it via the EC2 command-line tools, but I am new to Chef and a bit overwhelmed.
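
    A hedged sketch of the pieces involved (flag availability varies by knife-ec2 version, so treat this as a direction to verify rather than a recipe): knife-ec2 can size the root EBS volume at create time with --ebs-size, while a second volume is usually created and attached separately, for example with the EC2 API tools, and then mounted by a recipe:

        # size the root EBS volume at instance creation (knife-ec2 option)
        knife ec2 server create ... --ebs-size 100

        # or create and attach a separate 100 GB volume to the running instance
        # (vol-xxxxxxxx and i-yyyyyyyy are placeholders)
        ec2-create-volume --size 100 --availability-zone us-east-1c
        ec2-attach-volume vol-xxxxxxxx -i i-yyyyyyyy -d /dev/sdf

    Formatting and mounting /dev/sdf at /mnt (and remounting the ephemeral disk at /mnt-ephemeral) is then an fstab change that fits naturally into one of the follow-up Chef runs.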

    Read the article

  • Mailer Daemon greeting failed

    - by Xelluloid
    I wrote a tool that sends automated mails to a couple of addresses, and it worked for a couple of weeks. Since yesterday, I get Mailer-Daemon responses like this:

        Hi. This is the qmail-send program at test.test2.net.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        testuser@domain.com:
        Connected to 123.456.789.10 but greeting failed.
        Remote host said: 554 foo.bar.com
        I'm not going to try again; this message has been in the queue too long.

    Does someone have an idea what I can do now?
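
    A 554 at greeting time means the remote server rejects the connection before any mail is offered, which usually points at the sending IP (blacklisting, missing reverse DNS) rather than the message itself. A hedged first diagnostic is to reproduce the greeting by hand from the sending host (mail.domain.com here is a stand-in; the real host comes from dig domain.com MX):

        # connect to the remote MX on port 25 and read the banner it sends
        telnet mail.domain.com 25

    If the banner is a 554 instead of a 220, checking the sending IP against DNS blacklists and verifying its PTR record are the usual next steps.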

    Read the article

  • Control XML serialization of Dictionary<K, T>

    - by Luca
    I'm investigating XML serialization, and since I use lots of dictionaries, I would like to serialize them as well. I found the following solution for that (I'm quite proud of it! :) ):

        [XmlInclude(typeof(Foo))]
        public class XmlDictionary<TKey, TValue>
        {
            /// <summary>
            /// Key/value pair.
            /// </summary>
            public struct DictionaryItem
            {
                /// <summary>
                /// Dictionary item key.
                /// </summary>
                public TKey Key;

                /// <summary>
                /// Dictionary item value.
                /// </summary>
                public TValue Value;
            }

            /// <summary>
            /// Dictionary items.
            /// </summary>
            public DictionaryItem[] Items
            {
                get
                {
                    List<DictionaryItem> items = new List<DictionaryItem>(ItemsDictionary.Count);
                    foreach (KeyValuePair<TKey, TValue> pair in ItemsDictionary) {
                        DictionaryItem item;
                        item.Key = pair.Key;
                        item.Value = pair.Value;
                        items.Add(item);
                    }
                    return (items.ToArray());
                }
                set
                {
                    ItemsDictionary = new Dictionary<TKey, TValue>();
                    foreach (DictionaryItem item in value)
                        ItemsDictionary.Add(item.Key, item.Value);
                }
            }

            /// <summary>
            /// Indexer based on dictionary key.
            /// </summary>
            /// <param name="key"></param>
            /// <returns></returns>
            public TValue this[TKey key]
            {
                get { return (ItemsDictionary[key]); }
                set
                {
                    Debug.Assert(value != null);
                    ItemsDictionary[key] = value;
                }
            }

            /// <summary>
            /// Delegate for getting the key from a dictionary value.
            /// </summary>
            /// <param name="value"></param>
            /// <returns></returns>
            public delegate TKey GetItemKeyDelegate(TValue value);

            /// <summary>
            /// Add a range of values, automatically determining the associated keys.
            /// </summary>
            /// <param name="values"></param>
            /// <param name="keygen"></param>
            public void AddRange(IEnumerable<TValue> values, GetItemKeyDelegate keygen)
            {
                foreach (TValue v in values)
                    ItemsDictionary.Add(keygen(v), v);
            }

            /// <summary>
            /// Items dictionary.
            /// </summary>
            [XmlIgnore]
            public Dictionary<TKey, TValue> ItemsDictionary = new Dictionary<TKey, TValue>();
        }

    The classes deriving from this class are serialized in the following way:

        <FooDictionary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Items>
            <DictionaryItemOfInt32Foo>
              <Key/>
              <Value/>
            </DictionaryItemOfInt32Foo>
          </Items>
        </FooDictionary>

    This gives me a good solution, but:

    1. How can I control the name of the element DictionaryItemOfInt32Foo?
    2. What happens if I define a Dictionary<FooInt32, Int32> and I have the classes Foo and FooInt32?
    3. Is it possible to optimize the class above?

    Thank you very much!
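
    On question 1, one hedged lead: XmlSerializer lets you rename the generated array-item element with [XmlArrayItem] on the array property, which would sidestep the DictionaryItemOfInt32Foo name. A minimal, self-contained illustration (Pair/Bag are made-up stand-ins, not the class above):

        using System;
        using System.Xml.Serialization;

        public class Pair { public int Key; public string Value; }

        public class Bag
        {
            // renames the generated array-item element from "Pair" to "Entry"
            [XmlArrayItem("Entry")]
            public Pair[] Items;
        }

        class Demo
        {
            static void Main()
            {
                var bag = new Bag { Items = new[] { new Pair { Key = 1, Value = "one" } } };
                new XmlSerializer(typeof(Bag)).Serialize(Console.Out, bag);
                // output contains <Items><Entry><Key>1</Key><Value>one</Value></Entry></Items>
            }
        }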

    Read the article

  • Help updating cron entry using regular expressions

    - by Uday
    Hi, I am trying to update a cron entry, NOT by using crontab -e but with shell commands. For example, the cron entry is like this:

        10 * * * * /home/localuser/foo.sh -b 1 -h 4 > foo_output.sh 2>&1

    Now I need to edit the command-line parameters part ONLY, i.e. -b 1 -h 4, to something else which will be coming in from the user. The first thing would be to write the crontab to a temp file and then manipulate that temp file. Now, is there an easy way to edit that line using sed or something? The crude way would be to delete the entire line, write a new line with the entire expression, and then load that into cron. I am not very good with regular expressions. My system supports sed -i, so I was thinking this could be done in a single-line command. Thanks in advance.
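
    Since crontab can read a new table from stdin, the temp file can be skipped entirely. A sketch (the new values -b 2 -h 8 are stand-ins for whatever the user supplies):

        # rewrite only the options after foo.sh, then install the edited table
        crontab -l | sed 's|\(foo\.sh\) -b [0-9]* -h [0-9]*|\1 -b 2 -h 8|' | crontab -

    The pattern anchors on foo.sh so other cron lines are untouched; with real user input, the replacement should be built in a variable and quoted carefully.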

    Read the article

  • Apache redirect alias to a different domain

    - by John Magnolia
    I previously had both web and mail on the same server, and for each of my vhosts/domains I could visit example.com/mail or foo.com/mail, which would display the Roundcube webmail across all vhosts, e.g.:

        Alias /mail "/usr/share/apache2/roundcube/"

    Now I have moved the mail server onto a completely different server and have an SSL certificate for the main domain. https://mail.example.com is now the new location of Roundcube for all vhosts/domains. Question: is it possible to redirect all the /mail aliases from the web server to the new URL?
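
    Yes; mod_alias can redirect instead of aliasing. A hedged sketch: replace the old Alias line with a Redirect in each vhost (or once in the main server config, which vhosts inherit as a default):

        # send any /mail request on any vhost to the new mail host
        Redirect permanent /mail https://mail.example.com/

    Redirect matches by URL prefix, so example.com/mail and foo.com/mail both end up at the Roundcube login on the new server.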

    Read the article

  • How to choose NoSQL database engine?

    - by Poma
    We have a database with the following specs:

    - 30k records, 7 MB in size
    - 20 inserts/second
    - 1000 updates/second
    - 1000 range selects/second, by secondary index, approx 10 rows each
    - needs at least one secondary index
    - needs some mechanism to expire keys if they are not updated for 75 secs (can be done via a programmatic garbage collector, but that would require an additional 'last_update' index and add some load)
    - consistency is not required
    - durability is not required
    - the db should be stored in memory

    For now we use Redis, but it does not have a secondary index, and its keys index (index:foo:*) is too slow. Membase also does not have a secondary index (as far as I know). MongoDB and the MySQL memory engine have table-level locks. What engine will fit our use case?

    Read the article

  • Right solution for /etc/hosts file reset on reboot

    - by user846226
    I've just installed Funtoo, and after setting the FQDN in /etc/conf.d/hostname I noticed that when I set a list of aliases in the /etc/hosts file, it gets overwritten on each reboot. Someone suggested pointing the aliases at the 127.0.0.2 IP address, but that's not a valid solution for me. Could someone point me to the file where I should place entries like:

        127.0.0.1 local.foo
        127.0.0.1 local.bar

    in order to make them persist in /etc/hosts after rebooting? Thanks! PS: I think openresolv could be the one overwriting the file.

    Read the article

  • How to change the setting for a network device reported by ethtool, specifically Speed, on a VM?

    - by Ramadheer Singh
    This is related to these two questions, although they don't answer my question (the machines are RHEL6):

    1. ethtool not showing all the properties
    2. changing network speed to 1000Mb/s

    Output on the VM:

        [root@foo ~]# ethtool eth0
        Settings for eth0:
            Current message level: 0x00000007 (7)
            Link detected: yes

    Output on real hardware (interested in Speed):

        # ethtool eth0
        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
            Advertised auto-negotiation: Yes
            Speed: 1000Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 1
            Transceiver: internal
            Auto-negotiation: on
            Supports Wake-on: d
            Wake-on: d
            Link detected: yes

    If there's any way I can set this in the VM, please suggest it.
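
    A hedged note: on real hardware the setting would be changed with ethtool's -s option, but paravirtual NIC drivers (virtio and friends) expose no PHY, so there is usually nothing to set and the command simply fails:

        # forces link settings on a physical NIC; typically unsupported on a paravirtual one
        ethtool -s eth0 speed 1000 duplex full autoneg off

    The missing Speed line on the VM is cosmetic: the virtual NIC moves packets as fast as the host allows, regardless of any advertised link speed.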

    Read the article

  • Viewing zip archive contents using 'less' on OS X.

    - by multihead
    I couldn't help but notice that the 'less' program on all of the recent distributions of Linux that I've used (Ubuntu and Gentoo in this case) allows me to view the contents of ZIP and TAR archives, while the install of 'less' that I have on OS X (and Solaris) instead produces a "foo.zip may be a binary file. See it anyway?" prompt and proceeds to spit out the raw binary data instead of a nice file-structure listing. Google has not produced much in the way of helpful results; it's tricky to search for 'less' in this context. I downloaded and built the latest version from greenwoodsoftware.com, but even it refuses to show the contents of these archives, and I didn't come across any related configure/build options either. Any ideas? Thanks!
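
    A hedged explanation: the archive listing isn't a feature of less itself, which is why rebuilding it changed nothing. Linux distributions ship a preprocessor script (lesspipe) and point the LESSOPEN environment variable at it; less runs the preprocessor and pages its output. A sketch of recreating that on OS X (the script name and path vary by package; some install it as lesspipe.sh):

        # point less at a preprocessor that knows how to list archives
        export LESSOPEN="| /usr/local/bin/lesspipe.sh %s"
        less foo.zip

    lesspipe.sh can be installed via Homebrew or MacPorts, or copied from a Linux box; it dispatches on file extension to unzip -l, tar tvf, and so on.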

    Read the article

  • Open file without specifying exact location

    - by person
    Say I have a file in some obscure directory that I want to open and edit. I don't want to do something like this:

        vim ~/foo/bar/blah/doh/ugh.txt

    I'd rather be able to say "find this file and open it." I know there are commands like locate and find to find a file or directory, but I'm not sure whether these can (or even should) be utilized in what I'm trying to do. Basically, what is the simplest way to open a file with a program without specifying its exact location, both in cases where there isn't another file with the same name in the entire system and in cases where there are multiple?
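
    A couple of hedged sketches. When the name is unique, command substitution does it in one line; when it isn't, locate plus a numbered menu keeps it interactive:

        # unique name: open the first (only) match
        vim "$(find ~ -name ugh.txt 2>/dev/null | head -n 1)"

        # multiple matches: choose from a menu (bash)
        select f in $(locate ugh.txt); do vim "$f"; break; done

    locate needs its database populated (updatedb), and the select loop breaks on filenames with spaces, so treat these as starting points rather than robust tooling.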

    Read the article
