Search Results

Search found 20464 results on 819 pages for 'css classes'.


  • How to Access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked as internal in the parent class (in its own assembly) in an object that inherits from the same parent. Let me explain what I'm trying to do... I want to create Service classes that return IEnumerable with an underlying List to non-Service classes (e.g. the UI) and optionally return an IEnumerable with an underlying IQueryable to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting. All services would inherit from something like this (only relevant code shown): public class ServiceBase<T> { protected readonly ObjectContext _context; protected string _setName = String.Empty; public ServiceBase(ObjectContext context) { _context = context; } public IEnumerable<T> GetAll() { return GetAll(false); } //These are not the correct access modifiers.. I want something //that is accessible to children classes AND between descendant classes internal protected IEnumerable<T> GetAll(bool returnQueryable) { var query = _context.CreateQuery<T>(GetSetName()); if(returnQueryable) { return query; } else { return query.ToList(); } } private string GetSetName() { //Some code... return _setName; } } Inherited services would look like this: public class EmployeeService : ServiceBase<Employees> { public EmployeeService(ObjectContext context) : base(context) { } } public class DepartmentService : ServiceBase<Departments> { private readonly EmployeeService _employeeService; public DepartmentService(ObjectContext context, EmployeeService employeeService) : base(context) { _employeeService = employeeService; } public IList<Departments> DoSomethingWithEmployees(string lastName) { //won't work because method with this signature is not visible to this class var emps = _employeeService.GetAll(true); //more code... } } Because the parent class is reusable, it would live in a different assembly than the child services. With GetAll(bool returnQueryable) being marked internal, the children would not be able to see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps an intermediary parent class within the same assembly) so that each child service within the assembly can see each other's method; but it seems unnecessary since the functionality is already available in the parent class. For example: internal IEnumerable<Employees> GetAll(bool returnIQueryable) { return base.GetAll(returnIQueryable); } Essentially what I want is for services to be able to access other service methods as IQueryable so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas? EDIT You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway because I pass interfaces around, not classes. So in my example GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.
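
    One hedged note on the assembly boundary: if the only thing blocking the sibling services from seeing the internal protected GetAll(bool) is that they live in a different assembly than ServiceBase<T>, the base assembly can opt them in with InternalsVisibleTo. A minimal sketch, using hypothetical assembly names that are not taken from the question (the attribute goes in the base class's assembly, e.g. its AssemblyInfo.cs):

      // Sketch only: "Services.Core" is assumed to hold ServiceBase<T>, and
      // "Services.Implementations" to hold EmployeeService/DepartmentService.
      using System.Runtime.CompilerServices;

      [assembly: InternalsVisibleTo("Services.Implementations")]

      // With this in place, DepartmentService can call
      // _employeeService.GetAll(true) across the assembly boundary,
      // while the public surface still only exposes GetAll().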

    Read the article

  • Xcode/iPhone Development 6 months in - Annoyances

    - by clearbrian
    Hi I've been iPhone programming for 6 months and come from a PC/Java/Eclipse background and still have a few annoyances with Xcode/iPhone programming I wonder are there any shortcuts to. Is there any way to prevent multiple windows opening all the time in XCode? a) When you click on the Errors/Warnings in the bottom right of the status bar build errors are shown in separate window. Any way to get these to show in the main editor? b) Anyway to get debugger to appear in main editor. I have a big screen iMac and it's still window hell on Macs. When you come from Alt-Tab the Mac is a nightmare. 2) Anyway to get a toolbar item on the main editor to: a) Open Console (I know CMD-thingy-R) b) Open Break points (you have to open Debugger first then breakpoints) I know there's keyboard shortcuts but I have only left hand free others on the trackball so any keys on right hand side of keyboard are too far. I know you can add Finder toolbar scripts (just wondering if anyway to extend Xcode). Are there utilities to extend Xcode? Scripts/Automator/Any Services I can setup to help. Can you automate Xcode like you can with Windows/ActiveX/VBA 3) Limit lookups using CMD + double click. If I double click on a variable to find its definition using CMD + double click it shows every occurrence of all variables with that name. (annoying it you name all you maps mapView) Anyway to get it to limit to the current class or at least order so current class is first. 4) Find doesn't seem to loop backwards if result all above cursor position I'm in a class and I hit CMD + F for find. Find box appears. I enter some text hit return. It says I have x matches but only back arrow is highlight in Find But when I hit < it does nothing. I need to scroll to the top and redo the search. If the text is both forwards and backwards then both < are highlighted and it works. is this a bug or a 'feature' Missing Eclipse features I have been looking at the User Script menu but was wondering how powerful they are? 5) any scripts around to generate source from members such as description: @property @synthesize if I add a new member, run a script will generate @property/@syntesize and release in dealloc 7) any good sites for scripts? SCM Im having problems with SCM and Folders on HD under project Classes directory. You get a library e.g. JSON. It usually comes as a folder. You copy it to the /Classes for your project. /Classes/JSON I create a Group for the Library in Xcode under Classes group. Classes JSON I drag the files from the folder into xcode into the JSON Group. I add them to the SCM and icon changes from ? to A but if I try and commit them it say folder /JSON is not under SCM. Can you drag a folder into Xcode so that it AND its files get included in SCM? Anyway to stop Xcode Help from being on top all the time. I keep feeling like punching it and telling it to get out of the way! :) I dont mind it open just not in the way once I've finished. Yes I know I can Ctrl-W Sites: the main site I use to learn Obj-C are : stackoverflow.com Google code Search - tonnes of full apps on here http://www.iphonedevsdk.com/forum/iphone-sdk-development/ Apple Developers Forums (anyway to get RSS feed to these or is that blasphemy :) ) Safari - 100s of IT book though prob too many to keep up :) any others? Any site that gives simple examples for Obj-C/ UIKit The docs just show the methods but actual examples (Google code search has helped a lot here)

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch. All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?). So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature complete (but always working at least). Then sometimes you have to wait for clients to sign off. As a result you end up with revisions which are "ready for live" being scattered with ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example: Pete changes some CSS to make a new button look pretty (Revision 1). Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2). Dave's mod gets the nod so he merges it into Release and commits it with a log message mentioning revision number and bug tracking id. Pete adds more buttons to finish this mod, no CSS changes here though (Revision 3). Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it) but this overwrites Dave's CSS additions, which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas like reverting Release back to "Latest" and then just merging in all the Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4 which was not ready for live and Revision 5 which was. Suddenly we are getting ourselves in knots again with exactly the same problem! OK, so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment but can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, it would take too long. Side note, we are using TortoiseSVN so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it's still relevant); I thought everyone would like to read it if they found this question interesting: (http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

    Read the article

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising of getting lots of data from database A, do some manipulations with this data, then inserting the manipulated data into database B. I've only got permissions to select in database A but I can create tables and insert/update etc in database B. The manipulation and insertion part is written in perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and using it to initialize the perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure etc...)? Doing the transaction in one go doesn't seem like a good option because the amount data from database A means it would take at least a day or 2 for data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing 1000s of rows each. One way I thought I could do is to write a script that does commits per group, meaning I've got to track which group has already been inserted into database B. The only way I can think of to track the progress of which groups have been processed or not is either in a log file or in a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, read the file to initialize the classes and insert into database B. This also means that I got to do some logging, but should narrow it down to the exact row in the flatfile if any error occurs. The script will look something like this: use strict; use warnings; use DBI; #connect to database A my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password, { RaiseError => 1, AutoCommit => 0 }); #statement to get data based on group unique key my $sth = $dbh->prepare($my_sql); my @groups; #I have a list of this already open my $fh, '>>', 'my_logfile' or die "can't open logfile $!"; eval { foreach my $g (@groups){ #subroutine to check if group has already been processed, either from log file or from database table next if is_processed($g); $sth->execute($g); my $data = $sth->fetchall_arrayref; #manipulate $data, then use it to load perl classes for insertion into database B #. #. #. } print $fh "$g\n"; }; if ($@){ $dbh->rollback; die "something wrong...rollback"; } So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods is just variations on the same theme, and both require going back to where I've been tracking my progress (in table or file), skip the ones that've been commited to database B and process the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out from one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the perl classes for manipulating and inserting the data into the database.
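
    A hedged sketch of the commit-per-group idea, with progress tracked in a table in database B rather than a flat file; the table and column names (processed_groups, group_key) and the connection placeholders are made up for illustration. One small thing worth noting in the snippet above: the print $fh "$g\n" sits after the foreach's closing brace, so as written only the last group would ever be logged.

      use strict;
      use warnings;
      use DBI;

      # Placeholder connection to database B; AutoCommit off so each group
      # is its own transaction.
      my $dbh_b = DBI->connect($dsn_b, $user_b, $pass_b,
                               { RaiseError => 1, AutoCommit => 0 });

      sub is_processed {
          my ($group) = @_;
          my ($n) = $dbh_b->selectrow_array(
              'SELECT COUNT(*) FROM processed_groups WHERE group_key = ?',
              undef, $group);
          return $n;
      }

      foreach my $g (@groups) {
          next if is_processed($g);
          eval {
              # ...fetch group $g from database A, manipulate, insert into B...
              $dbh_b->do('INSERT INTO processed_groups (group_key) VALUES (?)',
                         undef, $g);
              $dbh_b->commit;      # one commit per group
          };
          if ($@) {
              $dbh_b->rollback;    # only this group is lost; a rerun resumes here
              warn "group $g failed: $@";
          }
      }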

    Read the article

  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; it's much like C, everything is within one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it of that you could take the initials). Apple prefixes classes with NS and says this prefix is reserved for Apple only. So far so well. But appending 2 to 4 letters to a class name in front is a very, very limited namespace. E.g. MS or AI could have an entirely different meanings (AI could be Artificial Intelligence for example) and some other developer might decide to use them and create an equally named class. Bang, namespace collision. Okay, if this is a collision between one of your own classes and one of an external framework you are using, you can easily change the naming of your class, no big deal. But what if you use two external frameworks, both frameworks that you don't have the source to and that you can't change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them in such a way that you can still use both classes? In C you can work around these by not linking directly to the library, instead you load the library at runtime, using dlopen(), then find the symbol you are looking for using dlsym() and assign it to a global symbol (that you can name any way you like) and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library, thus when you want to use the system open(), you just use open() and when you want to use the other one, you access it via the myOpen identifier. Is something similar possible in Objective-C and if not, is there any other clever, tricky solution you can use resolve namespace conflicts? Any ideas? Update: Just to clarify this: answers that suggest how to avoid namespace collisions in advance or how to create a better namespace are certainly welcome; however, I will not accept them as the answer since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching a solution to work with the frameworks right now within a single application. Any solutions to make this possible?

    Read the article

  • Making a login Using JavaScript, Will Incorporate PHP Later

    - by TIMOTHY
    Not sure why my code won't work. I'm teaching myself JavaScript; I know PHP moderately well, and I also know how unwise it is to use JavaScript to hold a password and username, but at the moment I just want the script to work. <html> <head> <title>34webs</title> <link rel="stylesheet" href="css/lightbox.css" type="text/css" media="screen" /> <link rel="stylesheet" type="text/css" media="screen" href="main.css"> <script type="text/javascript" src="js/prototype.js"></script> <script type="text/javascript" src="js/scriptaculous.js?load=effects,builder"></script> <script type="text/javascript" src="js/lightbox.js"></script> <script type="javascript" > function logintry (){ var usern = document.logn.username.value; var passw = document.logn.password.value; if(usern == blue && passw == bluee){ alert('password is correct!'); }else{ alert('password is wrong'); } } </script> </head> <body> <div id="bod"> <div id="nav"> <p id="buttonhead">34 web</p> <a href="#" class="button">HOME</a> <a href="#" class="button">NEWS</a> <a href="#" class="button">DOWNLOADS</a> <a href="#" class="button">ERA</a> <a href="#" class="button">IE BROWSER</a> <a href="#" class="button">DRIVERS</a> <form name="logn"> <table style="margin:0 auto; background:#0174DF; color:white;"> <tr> <td> Username<br> <input type="text" name="username" value=""> </td> </tr> <tr> <td> Password<br> <input type="password" name="password" value=""> </td> </tr> <tr> <td style="text-align:center;"> <input type="button" name="Submit" value="submit" onclick= javascript: logintry()> </td> </tr> </table> </form> </div>
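
    For what it's worth, two things in the posted markup stop the check from ever running: type="javascript" is not a recognised script type (it should be text/javascript), and blue / bluee are undefined identifiers rather than the string literals they seem to be meant as. A hedged sketch of just the corrected script block and button, assuming "blue"/"bluee" really are the intended test credentials:

      <script type="text/javascript">
      function logintry() {
          var usern = document.logn.username.value;
          var passw = document.logn.password.value;
          // quoted so they are compared as strings, not undefined variables
          if (usern == "blue" && passw == "bluee") {
              alert('password is correct!');
          } else {
              alert('password is wrong');
          }
      }
      </script>

      <input type="button" name="Submit" value="submit" onclick="logintry()">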

    Read the article

  • Slider Position

    - by RobinHood
    I used this code to create a slider. It's working fine. I want to place the slider next to my label control, like this: (Slider : slider bar). How do I do that? <div> <tr> <td class="aln-rht"> <asp:Label ID="Literal4" runat="server" Text="Slider :" Font-Bold="true" /> </td> <td> <%=Html.DropDownList("Scale", null, "--Select Scale--", new { id = "speed1" , Style = "display:none"})%> </td> </tr> </div> <script type="text/javascript"> $(function () { var abc = $('select#speed1').selectToUISlider().next(); fixToolTipColor(); }); function fixToolTipColor() { $('.ui-tooltip-pointer-down-inner').each(function () { var bWidth = $('.ui-tooltip-pointer-down-inner').css('borderTopWidth'); var bColor = $(this).parents('.ui-slider-tooltip').css('backgroundColor') $(this).css('border-top', bWidth + ' Solid ' + bColor); }); } </script> I used the ui.slider.extras.css, selectToUISlider.JQuery.js and ui.slider.js files. This is JQuery-ui-1.7.1.custom.css, which I am using... .ui-slider { position: relative; text-align: left; } .ui-slider .ui-slider-handle { position: absolute; z-index: 2; width: 1.2em; height: 1.2em; cursor: default; border-color: Maroon; background-color: Olive} .ui-slider .ui-slider-range { position: absolute; z-index: 1; font-size: .7em; display: block; border: 0; } .ui-slider-horizontal { height: .8em; border-color: Olive;} .ui-slider-horizontal .ui-slider-handle { top: -.3em; margin-left: -.6em; } .ui-slider-horizontal .ui-slider-range { top: 0; height: 100%; } .ui-slider-horizontal .ui-slider-range-min { left: 0; } .ui-slider-horizontal .ui-slider-range-max { right: 0; } .ui-slider-vertical { width: .8em; height: 100px; border-color: Olive; } .ui-slider-vertical .ui-slider-handle { left: -.3em; margin-left: 0; margin-bottom: -.6em; } .ui-slider-vertical .ui-slider-range { left: 0; width: 100%; } .ui-slider-vertical .ui-slider-range-min { bottom: 0; } .ui-slider-vertical .ui-slider-range-max { top: 0; }/* Tabs

    Read the article

  • jquery drag and drop script and problem in reading json array

    - by Mac Taylor
    i made a script , exactly like wordpress widgets page and u can drag and drop objects this is my jquery script : <script type="text/javascript" >$(function(){ $('.widget') .each(function(){ $(this).hover(function(){ $(this).find('h4').addClass('collapse'); }, function(){ $(this).find('h4').removeClass('collapse'); }) .find('h4').hover(function(){ $(this).find('.in-widget-title').css('visibility', 'visible'); }, function(){ $(this).find('.in-widget-title').css('visibility', 'hidden'); }) .click(function(){ $(this).siblings('.widget-inside').toggle(); //Save state on change of collapse state of panel updateWidgetData(); }) .end() .find('.in-widget-title').css('visibility', 'hidden'); }); $('.column').sortable({ connectWith: '.column', handle: 'h4', cursor: 'move', placeholder: 'placeholder', forcePlaceholderSize: true, opacity: 0.4, start: function(event, ui){ //Firefox, Safari/Chrome fire click event after drag is complete, fix for that if($.browser.mozilla || $.browser.safari) $(ui.item).find('.widget-inside').toggle(); }, stop: function(event, ui){ ui.item.css({'top':'0','left':'0'}); //Opera fix if(!$.browser.mozilla && !$.browser.safari) updateWidgetData(); } }) .disableSelection(); }); function updateWidgetData(){ var items=[]; $('.column').each(function(){ var columnId=$(this).attr('id'); $('.widget', this).each(function(i){ var collapsed=0; if($(this).find('.widget-inside').css('display')=="none") collapsed=1; //Create Item object for current panel var item={ id: $(this).attr('id'), collapsed: collapsed, order : i, column: columnId }; //Push item object into items array items.push(item); }); }); //Assign items array to sortorder JSON variable var sortorder={ items: items }; //Pass sortorder variable to server using ajax to save state $.post("blocks.php"+"&order="+$.toJSON(sortorder), function(data){ $('#console').html(data).fadeIn("slow"); }); } </script> main part is saving object orders in table and this is my php part : function stripslashes_deep($value) { $value = is_array($value) ? 
array_map('stripslashes_deep', $value) : stripslashes($value); return $value; } $order = $_GET['order']; $order = sql_quote($order); if(empty($order)){ echo "Invalid data"; exit; } global $db,$prefix; if (get_magic_quotes_gpc()) { $_POST = array_map('stripslashes_deep', $_POST); $_GET = array_map('stripslashes_deep', $_GET); $_COOKIE = array_map('stripslashes_deep', $_COOKIE); $_REQUEST = array_map('stripslashes_deep', $_REQUEST); } $data=json_decode($order); foreach($newdata->items as $item) { //Extract column number for panel $col_id=preg_replace('/[^\d\s]/', '', $item->column); //Extract id of the panel $widget_id=preg_replace('/[^\d\s]/', '', $item->id); $sql="UPDATE blocks_tbl SET bposition='$col_id', weight='".$item->order."' WHERE id='".$widget_id."'"; mysql_query($sql) or die('Error updating widget DB'); } print_r($order); now forexample the output is this : items\":[{\"id\":\"item26\",\"collapsed\":1,\"order\":0,\"column\":\"c\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":1,\"column\":\"i\"},{\"id\":\"item1\",\"collapsed\":1,\"order\":2,\"column\":\"i\"},{\"id\":\"item3\",\"collapsed\":1,\"order\":3,\"column\":\"i\"},{\"id\":\"item16\",\"collapsed\":1,\"order\":4,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":5,\"column\":\"i\"},{\"id\":\"item6\",\"collapsed\":1,\"order\":6,\"column\":\"i\"},{\"id\":\"item17\",\"collapsed\":1,\"order\":7,\"column\":\"i\"},{\"id\":\"item19\",\"collapsed\":1,\"order\":8,\"column\":\"i\"},{\"id\":\"item10\",\"collapsed\":1,\"order\":9,\"column\":\"i\"},{\"id\":\"item11\",\"collapsed\":1,\"order\":10,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"l\"},{\"id\":\"item5\",\"collapsed\":1,\"order\":1,\"column\":\"l\"},{\"id\":\"item8\",\"collapsed\":1,\"order\":2,\"column\":\"l\"},{\"id\":\"item13\",\"collapsed\":1,\"order\":3,\"column\":\"l\"},{\"id\":\"item21\",\"collapsed\":1,\"order\":4,\"column\":\"l\"},{\"id\":\"item28\",\"collapsed\":1,\"order\":5,\"column\":\"l\"},{\"id\":\"item7\",\"collapsed\":1,\"order\":0,\"column\":\"r\"},{\"id\":\"item20\",\"collapsed\":1,\"order\":1,\"column\":\"r\"},{\"id\":\"item15\",\"collapsed\":1,\"order\":2,\"column\":\"r\"},{\"id\":\"item18\",\"collapsed\":1,\"order\":3,\"column\":\"r\"},{\"id\":\"item14\",\"collapsed\":1,\"order\":4,\"column\":\"r\"}]} question is how can i find out column_id or order im a little bit confused

    Read the article

  • Browser sends http request with RANGE

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (css and js files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range bytes=0-". The server responds with either an empty 200OK or an empty 206 Partial Content. Here is an example: Response Headers Date Mon, 23 Nov 2009 23:33:26 GMT Server Apache/2.2.13 (Fedora) Last-Modified Thu, 19 Nov 2009 22:58:55 GMT Etag "18-3aec-478c14dbee138" Accept-Ranges bytes Content-Length 15084 Content-Range bytes 0-15083/15084 Connection close Content-Type text/css Request Headers Host fedora.test User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5 Accept text/css,*/*;q=0.1 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive Referer http://fedora.test/pictures/ Cookie __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685 Range bytes=0- If-Range "18-3aec-478c14dbee138" I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Request made to the outside (such as google analytics) are working fine. This is running in Fedora 11 in VirtualBox. Apache. PHP. The files are being served through the "shared folders" feature of VirtualBox (could it be related?). No error logs could help me.
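
    Since the files are served from a VirtualBox shared folder, one hedged thing to try: Apache's sendfile/mmap optimisations are known to misbehave on those mounts and commonly produce empty or truncated static files, which would fit the empty 200/206 responses. In httpd.conf (or the vhost):

      # Assumption: the empty responses are the shared-folder/sendfile issue.
      EnableSendfile Off
      EnableMMAP Off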

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and also to obtain better SEO results. We have now applied some minify measures, combined HTML, CSS etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration so the first one can serve images and static content vs the other one serving PHP, rewrites, etc. We can easily do that now with a VM using the same files and conf of vhosts (bind mounts) on Apache but with hardly any modules loaded. This means the light httpd will have a smaller footprint that would allow us to serve more and quicker, have more MinSpareServers running, etc. So, as browsers benefit from loading static content from different hostnames as well, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css etc.) to the same files at, say, cdn.main.com, thus the browser being able to open more connections. The question is, assuming we have a very complex rewrite ruleset already (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out; does the original code already include images linked from the CDN with absolute paths? EDIT Just to clarify, our concern is not so much with server performance or bandwidth. We could obviously employ an external CDN server but we have plenty of CPU and bandwidth. Our concern is with how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
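
    For reference, a hedged sketch of the redirect being described, using the hostnames from the question; note that every asset hit still costs main.com a pass through the ruleset plus an extra browser round trip for the 301, which is exactly the trade-off being asked about:

      # Sketch only: send static asset requests that still arrive at main.com
      # off to cdn.main.com with a 301.
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
      RewriteRule ^/?(.+\.(?:jpe?g|gif|png|css|js))$ http://cdn.main.com/$1 [R=301,L]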

    Read the article

  • No external src ip in log files (my router ip appears instead)

    - by bongo_fury
    I recently retired my workhorse WRT54G router/AP in favor of a Linksys EA2700. Since then, all inbound traffic (bound to an Ubuntu 10.02 box running LAMP)logged to Syslog, Apache's error and access logs, etc. (all behind said router) is getting logged with a src ip of 192.168.1.1, that of the router's internal ip. For example, here is an old entry from apache's access.log: 74.82.68.20 - - [22/Feb/2011:10:14:34 -0600] "GET /assets/css/style.css HTTP/1.1" 304 154 "http://example.com/view.php?event_id=1" "BlackBerry8520/5.0.0.822 Profile/MIDP-2.1 Configuration/CLDC-1.1 VendorID/100" And here is one since switching the router: 192.168.1.1 - - [05/Oct/2012:21:29:25 -0500] "GET /somedir/print.css HTTP/1.1" 200 650 "http://example.com/somedir/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"** That first field is the problem. Each and every entry in every log shows an "external" IP of 192.168.1.1, which isn't very helpful. Any ideas? Much thanks from a n00b!

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess: <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript </IfModule> and looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and found it. But when I track server responses with Firebug, no gzipped file can be found: HTTP/1.1 200 OK Date: Sat, 08 Sep 2012 21:41:21 GMT Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT Accept-Ranges: bytes Cache-Control: max-age=604800 Expires: Sat, 15 Sep 2012 21:41:21 GMT Vary: Accept-Encoding Keep-Alive: timeout=3, max=50 Connection: Keep-Alive Content-Type: text/css Content-Length: 18206 What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()). UPDATE: the request headers: GET /d/jquery-ui.css HTTP/1.1 Host: 127.0.0.1 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate DNT: 1 Connection: keep-alive Pragma: no-cache Cache-Control: no-cache
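
    One hedged way to see whether the DEFLATE filter is running at all: mod_deflate can log per-request input/output sizes. This has to go in the server or vhost config, not .htaccess:

      # If requests show up here with a compression ratio, the filter ran;
      # if they don't, the .htaccess block is never taking effect
      # (e.g. AllowOverride restrictions, or the file is served elsewhere).
      DeflateFilterNote Input instream
      DeflateFilterNote Output outstream
      DeflateFilterNote Ratio ratio
      LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
      CustomLog logs/deflate_log deflate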

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance. # Defaults AddDefaultCharset UTF-8 DefaultLanguage en-US ServerSignature Off FileETag None Header unset ETag Options -MultiViews #Options All -Indexes # Force the latest IE version or ChromeFrame <IfModule mod_setenvif.c> <IfModule mod_headers.c> BrowserMatch MSIE ie Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie </IfModule> </IfModule> # Proxy X-UA Setup <IfModule mod_headers.c> Header append Vary User-Agent </IfModule> #Rewrites Options +FollowSymlinks RewriteEngine On RewriteBase / # Redirect to non-WWW RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^(.*)$ http://%1/$1 [R=301,L] # Redirect to WWW RewriteCond %{HTTP_HOST} ^domain.com RewriteRule (.*) http://www.domain.com/$1 [R=301,L] # Redirect index to root RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L] # Caching ExpiresActive On ExpiresDefault A0 Header set Cache-Control "public" # 1 Year Long Cache <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$"> ExpiresDefault A31622400 </FilesMatch> # Proxy Caching <FilesMatch "\.(css|js|png)$"> ExpiresDefault A31622400 Header set Cache-Control "private" </FilesMatch> # Protect against DOS attacks by limiting file upload size LimitRequestBody 10240000 # Proper SVG serving AddType image/svg+xml svg svgz AddEncoding gzip svgz # GZip Compression <IfModule mod_deflate.c> <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" > SetOutputFilter DEFLATE </FilesMatch> </IfModule> # Error page ErrorDocument 404 /404.html # Deny access to sensitive files <FilesMatch "\.(htaccess|ini|log|psd)$"> Order Allow,Deny Deny from all </FilesMatch>

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and Im not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1 on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of: [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. rundir = /var/run/puppet # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl certificate_revocation = false [master] hiera_config=/etc/puppet/hiera.yaml reporturl = http://puppet2.vvmedia.com/reports/upload ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY # certname = dev-puppetmaster2.vvmedia.com # ca_name = 'dev-puppetmaster2.vvmedia.com' # facts_terminus = rest # inventory_server = localhost # ca = false [agent] # The file in which puppetd stores a list of the classes # associated with the retrieved configuratiion. Can be loaded in # the separate ``puppet`` executable using the ``--loadclasses`` # option. # The default value is '$confdir/classes.txt'. classfile = $vardir/classes.txt # Where puppetd caches the local configuration. An # extension indicating the cache format is added automatically. # The default value is '$confdir/localconfig'. localconfig = $vardir/localconfig my /etc/puppet/hiera.yaml consists of: :backends: yaml :yaml: :datadir: /etc/puppet/hieradata :hierarchy: - common - database I have a directory created in /etc/puppet/hieradata and within it contains: /etc/puppet/hieradata/common.yaml :nameserver: ["dnsserverfoo1", "dnsserverfoo2"] :smtp_server: relay.internalfoo.com :syslog_server: syslogfoo.com :logstash_shipper: logstashfoo.com :syslog_backup_nfs: nfsfoo:/vol/logs :auth_method: ldap :manage_root: true and /etc/puppet/hieradata/database.yaml :enable_graphital: true :mysql_server_package: MySQL-server :mysql_client_package: MySQL-client :allowed_groups_login: extranet_users does anyone have any idea what could be causing Hiera to not load the requested values? I have tried even restarting the Master. Thanks in advance, Cole
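
    One thing that stands out in the data files: the keys are written with a leading colon (:smtp_server:), which the YAML parser reads as Ruby symbols, while a lookup like hiera('smtp_server') asks for the plain string key and so comes back nil. A hedged sketch of common.yaml with the colons dropped, plus a lookup:

      # /etc/puppet/hieradata/common.yaml (sketch)
      nameserver:
        - dnsserverfoo1
        - dnsserverfoo2
      smtp_server: relay.internalfoo.com
      syslog_server: syslogfoo.com

      # lookup from a manifest, or `hiera smtp_server` on the master's command line
      $smtp = hiera('smtp_server')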

    Read the article

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked at the page source and found that the plugin style is loaded from the following URL: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320 This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads OK: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this would be of any help though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.08. So this really seems to be a browser problem after all.

    Read the article

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described on the blogpost The Asset Pipeline, from development to production and tweaked them to my environment. The two important files are: /etc/apache/site-available/example.com <VirtualHost *:80> ServerName example.com ServerAlias www.example.com DocumentRoot "/var/www/sites/example.com/current/public" ErrorLog "/var/log/apache2/example.com-error_log" CustomLog "/var/log/apache2/example.com-access_log" common <Directory "/var/www/sites/example.com/current/public"> Options All AllowOverride All Order allow,deny Allow from all </Directory> <Directory "/var/www/sites/example.com/current/public/assets"> AllowOverride All </Directory> <LocationMatch "^/assets/.*$"> Header unset Last-Modified Header unset ETag FileETag none ExpiresActive On ExpiresDefault "access plus 1 year" </LocationMatch> RewriteEngine On # Remove the www RewriteCond %{HTTP_HOST} ^www.example.com$ [NC] RewriteRule ^(.*)$ http://example.com/$1 [R=301,L] </VirtualHost> /var/www/sites/example.com/shared/assets/.htaccess RewriteEngine on RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b RewriteCond %{REQUEST_FILENAME}.gz -s RewriteRule ^(.+) $1.gz [L] <FilesMatch \.css\.gz$> ForceType text/css Header set Content-Encoding gzip </FilesMatch> <FilesMatch \.js\.gz$> ForceType text/javascript Header set Content-Encoding gzip </FilesMatch> But apache seems to send empty gzip files because the testsite looses all styles and firebug doesnt find any content for the css files. Altough if I call the assets-path directly I get some gibberish that looks like binary data. If I move the htaccess-file everything is back to normal. How could I find out where/what went wrong or do you have any suggestions what error I made? > apache2 -v System: Server version: Apache/2.2.14 (Ubuntu) Server built: Mar 5 2012 16:42:17 > uname -a Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux

    Read the article

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then it gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107) , but even that doesn't work correctly from here at work. When I do a telnet 216.38.160.107 80 GET http://cache.boston.com/universal/css/hp_bgcom.css from another location, I get a nice long CSS, as expected. From here, I get an error (trimmed for size): HTTP/1.1 400 Bad Request Your request could not be processed. Request could not be handled This could be caused by a misconfiguration, or possibly a malformed request. For assistance, contact your network support team. Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful: Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107] over a maximum of 30 hops: 1 * * * Request timed out. Since it's not really work-related, I don't really want to bring it up to our network team unless I know what's going on, or if there's some risk to the network (ex. malware or something)

    Read the article

  • Strange issue! Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages. Issue: on the first installation of my application everything works great. But when I update some JS and CSS files and then access the application again from a PC on the local network, I get the old JS and CSS versions! When I access the same application on the local server itself, I get the latest versions of those files! Link of my application on the local server: http://localhost/BADIL Link of my application from the local network: http://LOCAL_SERVER_IP/BADIL I have never had this kind of issue! I think there is a cache somewhere, but I don't know where; maybe in Win Server 2008 R2 or in VMware! The question is: why does everything work fine when I access the application on the server, but I get old versions of the JS and CSS files when I access it from the local network? Can anyone help me, please? Regards.

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10" which translates to: Server not configured to compress 1.0 requests Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0: Request (from chrome, captured via Fiddler): GET /css/reset.css HTTP/1.1 Host: [-----].com Connection: keep-alive Cache-Control: max-age=0 If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11 Accept: text/css,*/*;q=0.1 Referer: http://[-----].com/ Accept-Encoding: gzip,deflate,sdch Accept-Language: en-GB,en;q=0.8,en-US;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Response from IIS: HTTP/1.0 200 OK Cache-Control: no-cache, no-store Pragma: no-cache Content-Type: text/html; charset=utf-8 Expires: -1 Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Tue, 11 Dec 2012 11:57:03 GMT Connection: close Content-Length: 108837 Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0? Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't?
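
    If the office traffic really is arriving as HTTP/1.0 (for example, downgraded by a proxy on the path, which would also fit the Connection: close in the response), a hedged workaround is to let IIS compress 1.0 and proxied requests anyway rather than chasing the proxy:

      %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /noCompressionForProxies:"False" /commit:apphost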

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for Gzip. We set up a new distribution with a custom origin pointing back to our application servers which are behind a elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this set up, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Is there any additional settings in order to use an ELB as your custom origin? In the docs, it references this as a viable solution. It appears when we point the distribution to a single server in our production cluster, cloudfront properly caches our assets. Is it possible that the sticky sessions cookie and the subsequent header that gets added by it could be an issue? Cache-Control: no-cache="set-cookie" //Added by load balancer Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers: curl -I http://static.quick-cdn.com/css/9850999.css HTTP/1.0 200 OK Accept-Ranges: bytes Cache-Control: max-age=3700 Cache-Control: no-cache="set-cookie" Content-Length: 23038 Content-Type: text/css Date: Thu, 12 Apr 2012 23:03:52 GMT Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT Server: Apache/2.2.17 (Ubuntu) Vary: Accept-Encoding X-Cache: RefreshHit from cloudfront X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A== Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront) Connection: close

    Read the article

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this: +-- public │   +-- components │   +-- css │   +-- img +-- routes +-- views Essentially, the root is set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their appropriate destinations exist on disk. However, I don't want requests to other directories (/foo/asdf, /bar, /baz/index.html) to even run an IO operation; none of those should result in the disk being touched. I have a stanza that does the proxy to node.js: location @proxy { internal; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_pass http://localhost:3030; proxy_redirect off; } I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first. location /components/ { try_files $uri @proxy; } location /css/ { try_files $uri @proxy; } location /img/ { try_files $uri @proxy; } However, there is nothing that I can find that will give me location / { try_files @proxy }. How do I get the effect I want?
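
    A hedged sketch of one way to arrange it (the root path is hypothetical): give only the three asset prefixes a try_files, and let the catch-all location carry the proxy directives itself so no filesystem check ever happens for other URIs. Note that try_files takes space-separated arguments with the fallback last:

      root /var/www/app/public;   # hypothetical path

      location /components/ { try_files $uri @proxy; }
      location /css/        { try_files $uri @proxy; }
      location /img/        { try_files $uri @proxy; }

      # No try_files here, so nothing is stat()ed for /foo/asdf, /bar, etc.
      location / {
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }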

    Read the article
