Search Results

Search found 65392 results on 2616 pages for 'http service'.

Page 64/2616 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • Enabling HTTP caching and compression in IIS 7 for ASP.NET websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching: the max-age property, or the Expires header. Doing the changes via the IIS console: 1. Select the website for which you want to enable caching, then select HTTP Response Headers in the features pane. 2. Open the "Set Common Headers…" action and enable "Expire Web content"; choosing the "After" setting makes IIS generate the max-age property for the Cache-Control header. 3. Verify the headers with a tool like Fiddler: repeat requests should come back as 304 (Not Modified) responses from the server. Doing it the web.config way – we can add a staticContent section in the system.webServer section: <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" /> </staticContent> </system.webServer> Compression – By default static compression is enabled on IIS 7.0, but the only content type that falls under that category is CSS, which is not enough for most websites using lots of JavaScript. If you just thought enabling dynamic compression would fix this, you are wrong, so follow these steps. On some machines dynamic compression is not installed; to install it, open Server Manager, go to Roles > Web Server (IIS) > Role Services (scroll down) > Add Role Services, add the desired role (Web Server > Performance > Dynamic Content Compression), then Next, Install, wait… done! To enable it, open Server Manager, go to Roles > Web Server (IIS) > Internet Information Services (IIS) Manager, select Sites > Default Web Site > your web site in the next pane, and open IIS > Compression in the main pane. Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows: <mimeMap fileExtension=".js" mimeType="application/javascript" /> So the matching entry in the staticTypes section should be changed to: <add mimeType="application/javascript" enabled="true" /> Doing it the web.config way – we can add the following in the system.webServer section: <system.webServer> <urlCompression doDynamicCompression="false" doStaticCompression="true"/> </system.webServer> More information/references: http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx and http://www.west-wind.com/weblog/posts/98538.aspx
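    For reference, a combined web.config sketch along the lines described above might look like the following. One caveat, hedged: on a default IIS 7 install the httpCompression section is normally only editable in applicationHost.config, so the staticTypes change may have to be made there rather than in web.config.

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
          <urlCompression doStaticCompression="true" doDynamicCompression="false" />
          <httpCompression>
            <staticTypes>
              <add mimeType="application/javascript" enabled="true" />
            </staticTypes>
          </httpCompression>
        </system.webServer>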

    Read the article

  • Resolving IIS7 HTTP Error 500.19 - Internal Server Error

    - by fatherjack
    LiveJournal Tags: RedGate Tools,SQL Server,Tips and Tricks How To The requested page cannot be accessed because the related configuration data for the page is invalid. As part of my work recently I was moving SQL Monitor from the bespoke XSP web server to be hosted on IIS instead. This didn't go smoothly. I was lucky to be helped by Red Gate's support team (http://twitter.com/kickasssupport). I had SQL Monitor installed and working fine on the XSP site but wanted to move to IIS, so I reinstalled the software and chose the IIS option. This wasn't possible as IIS wasn't installed on the server. I went to Control Panel, Windows Features, installed IIS and then returned to the SQL Monitor installer. Everything went as planned, but when I browsed the site I got a huge error with the message "HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid." All the links I could find suggested it was a permissions issue, based on the directory where the config file was stored. I changed this any number of times and also tried altering its location. Nothing resolved the error. It was only when I was trying the installation again that I read through the details from Red Gate and noted that they referred to ASP settings that I didn't have. Essentially, I had installed IIS using the default settings, and that DOESN'T include ASP. When this dawned on me I went back through the Windows components installation process and ticked the ASP service within the IIS role. Completing this and going back to the IIS management console, I saw many more options. When I clicked on the Authentication icon this time I got the option to enable not only Anonymous Authentication but also ASP.NET Impersonation (which is disabled by default). Once I had enabled this, the SQL Monitor website worked without error. I think the HTTP Error 500.19 is misleading in this case; at the very least IIS should be able to recognise whether the ASP service is installed and include a hint that it should be. I hope this helps some people and avoids wasting as much of your time as it did mine. Let me know if it helps you.
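    If you prefer to script the fix instead of clicking through Server Manager, here is a minimal sketch, assuming Windows Server 2008 R2 and its ServerManager PowerShell module (feature names vary by OS version):

        Import-Module ServerManager
        # Install IIS plus the ASP.NET role service the installer expects;
        # add Web-ASP as well if classic ASP is needed.
        Add-WindowsFeature Web-Server, Web-Asp-Net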

    Read the article

  • GWB | 30 Posts in 60 Days Update

    - by Staff of Geeks
    One month after the contest started, we definitely have some leaders and one blogger who has reached the mark. Keep up the good work, guys; I have really enjoyed the content being produced by our bloggers. Current Winners: Enrique Lima (37 posts) - http://geekswithblogs.net/enriquelima Almost There: Stuart Brierley (28 posts) - http://geekswithblogs.net/StuartBrierley Dave Campbell (26 posts) - http://geekswithblogs.net/WynApseTechnicalMusings Eric Nelson (23 posts) - http://geekswithblogs.net/iupdateable Coming Along: Liam McLennan (17 posts) - http://geekswithblogs.net/liammclennan Christopher House (13 posts) - http://geekswithblogs.net/13DaysaWeek mbcrump (13 posts) - http://geekswithblogs.net/mbcrump Steve Michelotti (10 posts) - http://geekswithblogs.net/michelotti Michael Freidgeim (9 posts) - http://geekswithblogs.net/mnf MarkPearl (9 posts) - http://geekswithblogs.net/MarkPearl Brian Schroer (8 posts) - http://geekswithblogs.net/brians Chris Williams (8 posts) - http://geekswithblogs.net/cwilliams CatherineRussell (7 posts) - http://geekswithblogs.net/CatherineRussell Shawn Cicoria (7 posts) - http://geekswithblogs.net/cicorias Matt Christian (7 posts) - http://geekswithblogs.net/CodeBlog James Michael Hare (7 posts) - http://geekswithblogs.net/BlackRabbitCoder John Blumenauer (7 posts) - http://geekswithblogs.net/jblumenauer Scott Dorman (7 posts) - http://geekswithblogs.net/sdorman Technorati Tags: Standings,Geekswithblogs,30 in 60

    Read the article

  • .htaccess 301 redirect with regex?

    - by Eddie ZA
    How do I do this with a regular expression? Old -> New http://www.example.com/1.html -> http://www.example.com/dir/1.html http://www.example.com/2.html -> http://www.example.com/dir/2.html http://www.example.com/3.asp -> http://www.example.com/dir/3.html http://www.example.com/4.asp -> http://www.example.com/dir/4.html http://www.example.com/4_a.html -> http://www.example.com/dir/sub/4-a.html http://www.example.com/4_b.html -> http://www.example.com/dir/sub/4-b.html I've tried this: Redirect 301 /1.html http://www.example.com/dir/1.html Redirect 301 /2.html http://www.example.com/dir/2.html Redirect 301 /3.asp http://www.example.com/dir/3.html Redirect 301 /4.asp http://www.example.com/dir/4.html Redirect 301 /4_a.html http://www.example.com/dir/sub/4-a.html Redirect 301 /4_b.html http://www.example.com/dir/sub/4-b.html
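    Since the mapping is mechanical, the per-file rules can likely be collapsed into two RedirectMatch rules (an untested sketch, assuming numeric file names and single-letter suffixes as in the examples above):

        RedirectMatch 301 ^/([0-9]+)\.(html|asp)$ http://www.example.com/dir/$1.html
        RedirectMatch 301 ^/([0-9]+)_([a-z])\.html$ http://www.example.com/dir/sub/$1-$2.html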

    Read the article

  • myToys.de GmbH announces integration of ZVT payment terminal interface with Oracle Retail Point-of-Service

    - by user801960
    In our latest guest post, Sascha Kraatz, Developer Oracle E-Business Suite at myToys.de, announces the development and integration of its ZVT payment terminal interface with the Oracle Retail Point-of-Service solution. myToys.de GmbH, which runs Oracle Retail Point-of-Service (ORPOS) in its 13 retail stores in Germany (see press release), has developed and implemented a Java-based interface for integrating the ZVT payment terminal with ORPOS. Through the combined support of payment service provider easycash GmbH and Ingenico GmbH, Germany's leading payment terminal provider, myToys.de has become the first organisation to create this new automated solution for Oracle Retail Point-of-Service, which has eliminated the input errors that could occur with manual payment terminals and is localised for the German market. Ingo Stober, head of retail business at myToys.de, confirms: "With this solution, we can speed up the payment process, reduce manual errors and enhance the customer experience in our stores". myToys.de GmbH is a member of the Otto Group and one of the leading multichannel retailers for toys and other kids' products in Germany. Customers can choose from over 100,000 attractive products, ranging from items for expectant mothers and basic baby equipment to items for school children and beyond. In 2006, the first of 13 myToys.de retail branches was opened. If you would like to find out more about this solution, please contact the head of Oracle E-Business Suite Development at myToys.de, Mr. Ralf Schmilewski, or leave a comment below.

    Read the article

  • SharePoint, HTTP Modules, and Page Validation

    - by Damon Armstrong
    Sometimes I really believe that SharePoint actively thwarts my attempts to get it to do what I want.  First you look at something and say, wow, that should work.  Then you realize it doesn't.  Then you have an epiphany and see a workaround.  And when you almost have that workaround working… well, then SharePoint says no again.  Then it's off on another whirlwind adventure to find a workaround for the workaround.  I had one of those issues today, but I think I finally got past the last roadblock. So, I was writing an HTTP module as a workaround for another problem.  Everything looked like it was working great because I had been slowly adding code into the HTTP module bit by bit in a prototyping effort.  Finally I put the last bit of code in place… and I started to get an error: "The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again." This is not an uncommon error – it normally occurs when you are updating an item on a GET request and you have not marked the web containing the item with AllowUnsafeUpdates.  One issue, however: I wasn't updating anything in my code.  I was, however, getting an SPWeb object, so I decided to set the AllowUnsafeUpdates property on it to true for good measure. Once that was in place, I ran it again… "The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again." WTF?!?!  I really expected that setting the AllowUnsafeUpdates property on the SPWeb would fix the issue, but clearly that was not the case.  I have had occasion to disassemble some SharePoint code with .NET Reflector in the past, and one of the things SharePoint abuses a bit more than it should is the HttpContext.  One way to avoid this abuse is to clear out the HttpContext while your code runs and then set it back once you are done.  I tried this next, and everything worked out just like I had expected.  So, if you are building an HTTP module for SharePoint and some code that you are running ends up giving you a security validation error, remember to try running that code with AllowUnsafeUpdates turned on and try running the code with the HttpContext nulled out (just remember to set it back after your code runs or else you'll really jack things up).
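    To make the workaround concrete, here is a minimal C# sketch of the pattern described above (siteUrl is a placeholder, and the snippet assumes the Microsoft.SharePoint and System.Web namespaces; the finally block is what keeps you from really jacking things up):

        // Stash the current context, null it out while the update runs, then restore it.
        HttpContext savedContext = HttpContext.Current;
        try
        {
            HttpContext.Current = null;
            using (SPSite site = new SPSite(siteUrl))   // siteUrl is illustrative
            using (SPWeb web = site.OpenWeb())
            {
                web.AllowUnsafeUpdates = true;
                // ... perform the updates that triggered the security validation error ...
                web.Update();
            }
        }
        finally
        {
            HttpContext.Current = savedContext;
        }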

    Read the article

  • SQL Azure Service Issues – 10.27.2012 (Restored Now)

    - by ToStringTheory
    Please note that if you have a Windows Azure website or use SQL Azure, your site may currently be experiencing downtime.  Notice: I just called in regarding one of my public-facing internet sites, because the site was failing to load anything but its error page, I couldn't connect to the database to inspect application error logs, and the Windows Azure Management portal wouldn't load the SQL Azure extension. After speaking to the representative, he also mentioned that they were having some problems updating the Service Dashboard which shows service up/down time, and for now they are posting messages at http://account.windowsazure.com.  Please note that this issue may only be affecting certain regions.  Last, I may have misheard the representative, but he said that the outage was being categorized as a level 8, and if I heard correctly, I think he said that level 8 was the worst level. I can't say for sure on this though, because the phone connection to their support number was bad – large amounts of white noise. Good luck! Update It appears that this outage may also be affecting the following services: SQL Database, Service Bus, Datamarket, Windows Azure Marketplace, Shared Caching, Access Control 2.0, and SQL Reporting. The note on the account page says the South Central US region; however, I believe the representative I spoke to also mentioned North Central. As I said before, though, the connection was bad. Update 2 My site regained connectivity about an hour ago, and it appears that the service dashboard is back in operation with correct status and history. It does appear that I misheard on the phone regarding multiple regions, so chances are this only affected a percentage of the platform. All in all, if this WAS their worst level of problem, they really got it fixed and back up pretty fast. I understand that it is inherent for a complex system such as Azure to have ups and downs, but at the end of the day, I am still happy to support Azure to its fullest!

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all "nginx upstreams" proxy-passing calls to the haproxy fast_thin and slow_thin listeners ( server 127.0.0.1:3100 and server 127.0.0.1:3200 ), which load-balance across 6 Thin servers ( 127.0.0.1:3000 .. 3005 ). Static files like /blog are currently fine. The chain is: nginx on port 80 -> haproxy on 3100 and 3200 -> thin on 3000 .. 3005 -> then Rails. Here is /etc/nginx/nginx.conf: user nginx; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; include /etc/nginx/conf.d/*.conf; } Then /etc/nginx/conf.d/default.conf: upstream fast_thin { server 127.0.0.1:3100; } upstream slow_thin { server 127.0.0.1:3200; } server { listen 80; server_name www.gitwatcher.com; rewrite ^/(.*) http://gitwatcher.com/$1 permanent; } server { listen 80; server_name gitwatcher.com; access_log /var/www/gitwatcher/log/access.log; error_log /var/www/gitwatcher/log/error.log; root /var/www/gitwatcher/public; # index index.html; location /about { proxy_pass http://fast_thin; break; } location /trends { proxy_pass http://slow_thin; break; } location /categories { proxy_pass http://slow_thin; break; } location /signout { proxy_pass http://slow_thin; break; } location /auth/github { proxy_pass http://slow_thin; break; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://slow_thin; break; } } } Then the haproxy config file /etc/haproxy/haproxy.cfg: global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon #debug #quiet nbproc 1 # number of processing cores defaults log global retries 3 maxconn 2000 contimeout 5000 mode http clitimeout 60000 # maximum inactivity time on the client side srvtimeout 30000 # maximum inactivity time on the server side timeout connect 4000 # maximum time to wait for a connection attempt to a server to succeed option httplog option dontlognull option redispatch option httpclose # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode) option abortonclose # enable early dropping of aborted requests from pending queue option httpchk # enable HTTP protocol to check on servers health option forwardfor # enable insert of X-Forwarded-For headers balance roundrobin # each server is used in turns, according to assigned weight stats enable # enable web-stats at /haproxy?stats stats auth haproxy:pr0xystats # force HTTP Auth to view stats stats refresh 5s # refresh rate of stats page listen rails_proxy 127.0.0.1:3100 # - equal weights on all servers # - maxconn will queue requests at HAProxy if limit is reached # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue # - check health every 20000 microseconds server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000 listen slow_proxy 127.0.0.1:3200 # cluster for slow requests, lower the queues, check less frequently server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000 server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000 server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000 And the Thin config file /etc/thin/gitwatcher.yml: --- chdir: /var/www/gitwatcher environment: production address: 0.0.0.0 port: 3000 timeout: 30 log: log/thin.log pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 100 require: [] wait: 30 servers: 6 daemonize: true If I look at the open listening ports, I get the following: root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin" nginx 834 root 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 835 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 837 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) haproxy 1908 haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN) haproxy 1908 haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN) root@fullness:/var/www/gitwatcher# iptables -L gives me the following: Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:22222 ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT all -- anywhere anywhere DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Any help?
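    Not necessarily the cause of the 503, but one detail worth checking in the HAProxy config, offered as an observation: the three fast servers are all named web1. HAProxy identifies backend servers by name in the stats page and in persistence cookies, so distinct names are the usual convention:

        listen rails_proxy 127.0.0.1:3100
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web2 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web3 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000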

    Read the article

  • Naming a class that decides to retrieve things from cache or a service + architecture evaluation

    - by Thomas Stock
    Hi, I'm a junior developer and I'm working on a pet project that I want to learn as much as possible from. I have the following scenario: There's a WCF service that I use to retrieve and update data, let's say Cars. So it's called CarWCFService and has GetCars(), SaveCar(), and so on. It implements the interface ICarService. This isn't the actual WCF service but more like a wrapper around it. Upon retrieving data from the service, I want to store it in local memory, as a cache. I have made a class for this called CarCacheService, which also implements the interface ICarService. (I will explain later why it implements ICarService.) I don't want client code to be calling these implementations. Instead, I want to create a third implementation of ICarService that tries to read from the CarCacheService before calling the CarWCFService, stores retrieved data in the CarCacheService, etc. Three questions: How do I name this third class? I was thinking about something as simple as CarService. This does not really say what the service does exactly, though. Is the naming for the other classes good? Would this naming and architecture be obvious to future programmers? This is my biggest concern. Does this architecture make sense? The reason that I implement ICarService on the CarCacheService is mainly that it allows me to fake the CarWCFService while debugging. I can store dummy data in a CarCacheService instance and pass it to the CarService, together with an(other) empty CarCacheService. If I made CarCacheService and CarWCFService public, I could let client code decide whether to drop the caching and work directly against the CarWCFService.
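    One common convention for the third class is a decorator-style name such as CachedCarService. A minimal C# sketch of the idea, with the types reduced to the bare minimum from the question (StoreCars is a hypothetical cache helper):

        using System.Collections.Generic;

        public class Car { public string Make; }

        public interface ICarService
        {
            IList<Car> GetCars();
        }

        public class CarCacheService : ICarService
        {
            private IList<Car> _cars;
            public IList<Car> GetCars() { return _cars; }
            public void StoreCars(IList<Car> cars) { _cars = cars; }   // hypothetical helper
        }

        // Reads through the cache first, falls back to the wrapped (WCF-backed) service.
        public class CachedCarService : ICarService
        {
            private readonly ICarService _inner;        // e.g. CarWCFService
            private readonly CarCacheService _cache;

            public CachedCarService(ICarService inner, CarCacheService cache)
            {
                _inner = inner;
                _cache = cache;
            }

            public IList<Car> GetCars()
            {
                IList<Car> cached = _cache.GetCars();
                if (cached != null && cached.Count > 0)
                    return cached;

                IList<Car> cars = _inner.GetCars();
                _cache.StoreCars(cars);
                return cars;
            }
        }

    Constructor injection, as above, also covers the debugging scenario: a test can pass a dummy-filled CarCacheService where the WCF-backed service would normally go.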

    Read the article

  • Communication between WCF service [library] and Self-host [Winform]

    - by Mur Haf Soz
    Introduction: I have a WCF service library and a self-hosting WinForms application. The service's features are a file explorer (copy, move, delete, new folder, etc.) and a task manager (run, kill, update list). Now I want to add other features, like chatting between the self-host and the client, and sending an image from the client to the self-host so that when it is received, it is shown in a PictureBox on a new form. So far I have two endpoints (Task Manager, File Manager) that run under one service, "MainService". I set up all the connections using the .NET 4.0 WCF configuration wizards, and I'm using netTcpBinding. Problem: I need to know how to communicate between the WCF service library and the self-host, so I can append a chat message received from the client to the self-host form's textbox TextBoxChat. I also need to call a client callback from the self-host when the Send button is clicked, to send the message from the self-host textbox TextBoxMessage. Let's say this is the self-host ChatForm. So is it possible to do that in WCF? I would prefer to run the ChatEndpoint under MainService, so all endpoints use one port.
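    Yes, this is what WCF duplex contracts are for, and netTcpBinding supports them natively. A hedged sketch of what the chat contract might look like (interface and member names are illustrative, not from the actual project):

        using System;
        using System.ServiceModel;

        public interface IChatCallback
        {
            [OperationContract(IsOneWay = true)]
            void ReceiveMessage(string text);
        }

        [ServiceContract(CallbackContract = typeof(IChatCallback))]
        public interface IChatService
        {
            [OperationContract(IsOneWay = true)]
            void SendMessage(string text);
        }

        public class ChatService : IChatService
        {
            // The self-host form subscribes to this to append incoming text to TextBoxChat.
            public static event Action<string> MessageReceived;

            public void SendMessage(string text)
            {
                // The client's callback channel; a real host would store this somewhere
                // so it can push replies when its Send button is clicked.
                IChatCallback callback =
                    OperationContext.Current.GetCallbackChannel<IChatCallback>();

                Action<string> handler = MessageReceived;
                if (handler != null) handler(text);
            }
        }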

    Read the article

  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places – all with the goal of improving every Manager’s self service experience. Manager Dashboard These new capabilities have the potential to significantly impact an organization’s bottom line, and here is why. Increased Efficiency The Manager Dashboard provides a ‘one-stop shop’ for your Managers with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and Managers can personalize it to fit within their personal way of conducting their tasks. The Related Actions feature further improves the ease with which Managers get their work done by providing one-click access to Manager Self Service transactions.  Increased Job Satisfaction The streamlined Manager transactions, related actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of their day to day HR tasks. Enhanced Decision Support Administrators can ensure the information and analytics they want their Managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, Managers can make the right business decisions needed to improve their team and their team’s productivity. For more details on the Manager Dashboard and some of the other newly posted features, such as a new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts

    Read the article

  • Understanding Service Compensation, part of the Industrial SOA series

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects will be described in this article. These include the Compensation pattern and the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book, "SOA Design Patterns", and are presented here in detail, together with our practical experiences. We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA, as multiple atomic service operations cannot generally be linked with classic transactions; that would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration, or by applying the Composite pattern, and the transaction bracket has to be expanded as a result. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity. For the Compensation pattern, we would like to address the following questions: Why is compensation important in relation to SOA? How is the topic of compensation linked with the topic of transactions? What are the challenges with regard to compensation... Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Industrial SOA,SOA,SOA Service Compensation,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress
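    To make the mechanism concrete, here is a small C# sketch of the Compensation pattern (service and type names are invented for illustration; in a real SOA the undo steps would themselves be service operations driven by the orchestration layer):

        using System;
        using System.Collections.Generic;

        public class Order { }
        public interface IInventoryService { void ReserveStock(Order o); void CancelReservation(Order o); }
        public interface IPaymentService { void Charge(Order o); void Refund(Order o); }
        public interface IShippingService { void Schedule(Order o); }

        public static class OrderOrchestration
        {
            public static void PlaceOrder(IInventoryService inventory, IPaymentService payment,
                                          IShippingService shipping, Order order)
            {
                // Each completed step registers its undo action; on failure they run in reverse.
                var compensations = new Stack<Action>();
                try
                {
                    inventory.ReserveStock(order);
                    compensations.Push(() => inventory.CancelReservation(order));

                    payment.Charge(order);
                    compensations.Push(() => payment.Refund(order));

                    shipping.Schedule(order);
                }
                catch
                {
                    while (compensations.Count > 0)
                        compensations.Pop()();   // restore a consistent system state
                    throw;
                }
            }
        }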

    Read the article

  • Create a filter to consider http://example.com/foo/bar as http://example.com/index.php/foo/bar

    - by magnetik
    I'm using URL rewriting to map my url http://example.com/foo/bar/ to http://example.com/index.php/foo/bar. I'm not linking the index.php/... url anywhere, but for some reason, some users arrive at the index.php url. In Google Analytics, I have a lot of duplicates that make it annoying to follow up on the traffic. I've looked at the Advanced filters but I'm struggling to make them work. Any regex and Google Analytics pros to help me out?
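    One approach that may work here, hedged since I cannot test it against your profile: a custom Search and Replace filter that strips the index.php prefix, so both URL forms are reported identically.

        Filter Type:    Custom filter > Search and Replace
        Filter Field:   Request URI
        Search String:  ^/index\.php
        Replace String: (leave empty)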

    Read the article

  • Accessing a Web Service: Learning Resource needed

    - by abel
    I have been searching for resources to learn (Java) Web Services. Although I have found a lot of resources and tutorials on JWS, I am confused by the version numbers, the abbreviations and Metro. Plus the last update to Metro was in 2008. Is it a worthwhile thing to learn? I wanted to learn how to access Web Services, since an upcoming project is about accessing one. I have some experience with OAuth on Twitter (using available code). Things I know about the project: I have to access a Web Service. Java is the preferred platform to use (although I know I can use any). Axis can be used to access the Web Service (I have never used Axis). I have a meeting scheduled to learn more, but I sure don't want to look silly since I am no Java expert and have never created or accessed Web Services using Java. My questions: 1. Can someone point me to a tutorial which will help me learn how to access an already running Web Service (preferably SOAP(?), not REST; it's XML based)? 2. Would you recommend using PHP or Python to do the work of accessing the web service? I am expecting a lot of naysaying, but I hope I get some answers too. I will clarify things if needed.

    Read the article

  • The actual difference between a stylesheet in the header and a separate file

    - by David Knight
    I am wondering if someone can give me an opinion on this. I have always been taught to have all of the CSS in a separate file that is referenced from the head of the page. Reading this article http://www.lukew.com/ff/entry.asp?1792, the author talks about making the Guardian website responsive. One of the things he notes they did to make the site faster and more resilient is to add the CSS inline into the header, thus reducing HTTP requests. Now this got me thinking about the right/best/fastest way of using CSS. If you have one main CSS file, it is going to be called and read by the site on every page, no matter how big it is. So with that in mind, I'm actually starting to think it's better to just inline the whole stylesheet and remove one HTTP round trip. I know that for the purposes of neatness and being able to edit the file, a separate file is better. But which would you recommend, and which do you think is faster? Thanks!

    Read the article

  • SOA Cloud and Service Technology Symposium December 4-5th 2013 in Mexico

    - by JuergenKress
    Do you want to attend the SOA, Cloud and Service Technology Symposium on December 4-5, 2013 in Mexico? Please feel free to use the promotional code "Q14CB324" for a 50% discount. Here are the conference presentations from partners and Oracle: "Cloud Service Brokers" Jürgen Kress, Oracle, Rolando Carrasco, S&P Solutions "Fast Data - Delivering High-Velocity and Volume Big Data Business Value in Real Time" Robin Smith, Oracle, Robert Greene, Oracle "Unlocking the Value of Big Data" Raul Goycoolea Seoane, Oracle "Modeling Business Process Architecture on BPMN 2.0 and Decomposing it to Service Inventory" Jorge Heredia, Itehl Consulting "BPM and Dynamic/Adaptive Case Management - Friends or Foes?" Manas Deb, Oracle "Building SOA and MDM Solutions to Enable Cloud Adoption" Luis Weir, HCL, John Dunn, HCL "Secure Applications in the Cloud: Security & Privacy Patterns and Mechanisms" Ricardo Puttini, University of Brasília, Anderson Nascimento, University of Brasília "SOA, Data Grids, Mobile and Clouds - Where Next for SOA?" Matt Brasier, C2B2 Consulting LTD "Achieving Greater Responsiveness with BPM" Andre Boaventura, Oracle Do you want to meet the Oracle team at the conference? Please send us a message on twitter @soacommunity. Do you want to network at the conference? Please use the hashtag #soacommunity. For details and registration please visit the conference website. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,Service Technology Symposium,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SOA, Cloud & Service Technology Symposium VIP pass 50% discount

    - by JuergenKress
    A series of podcasts, brought to you by Arcitura Education, SOASchool.com and CloudSchool.com in co-operation with the International Service Technology Symposium Conference Series, and the Prentice Hall Service Technology Series from Thomas Erl. As Part II of this special podcast series, individuals will be able to tune into six distinct audio podcasts with expert speakers for the upcoming 5th International SOA, Cloud + Service Technology Symposium in London, UK on September 24-25, 2012. SOA, Cloud and Service Technology Symposium 2012 For conference details please visit the registration page. For the Oracle promotion discount, please enter the code DJMXZ370 during registration. Oracle Specialized SOA & BPM Partners at the conference: Oracle Specialized partners have proven their skills by certifications and customer references. To find a local Specialized partner please visit http://solutions.oracle.com. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Cloud,SOA Governance,SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • part of google search appliance drawing from http instead of https

    - by mcgyver5
    we are using the Google Search Appliance in our web app. It is used by several other parts of our organization, but we are using it on a web app that uses https. So we followed Google's instructions to get all the Google code via https so that users don't get the annoying "This page contains both secure and insecure items" popup. Most of the Google code has behaved and come to us as https, but there is a part of it pulling from http://www.google.com/cse full URL = http://www.google.com/cse?q=searchTerma&cx=001025153263958516519%3Aj2323tveixc&cof=FORID%3A11%3BNB%3A1&ie... that causes the insecure items warning to pop up. This popup occurs on the results page, and the above URL is the only non-secure request I can find.

    Read the article

  • Easy to use JSON Web Service Hosts?

    - by Serguei Fedorov
    I saw this being used by someone in a college class once and cannot find anything analogous to it. I am not sure if this is the right place to ask about something like this, but hopefully I can get some direction. I want to write an app which uses web services that can obtain and push data back to the client apps. Right now I am gathering up the design and documentation of this app. Not having to code the web service myself would reduce development time by a lot; instead I would use a hosted web service that is easy to set up and manage. Either XML-based or JSON-based is totally fine, though I would prefer JSON for its reduced overhead. Like I said, I have seen this demonstrated before; you define the data structure to be stored and how it is treated. I cannot find the person who demonstrated this; hopefully someone can suggest something. The service he used was free with a limited number of requests allowed. EDIT: He was using an online service to do this, not a script installed on an existing web hosting account. Thank you!

    Read the article

  • Can using an apt proxy (d-i mirror/http/proxy string http://mymirror) affect the installation of a .deb?

    - by Randolph
    I have been doing Ubuntu deployment using a preseed.cfg. After becoming comfortable with the packages being installed, it was time to reduce download time and internet traffic by creating a mirror. I ended up doing a "partial mirror" using apt-cacher-ng and preseeding it by adding d-i mirror/http/proxy string http://mymirror to the preseed.cfg. This is where things got strange. I have a few .debs that I install as part of preseed/late_command by wgetting them and installing them with dpkg -i. The packages were installing without issue until I added the proxy. With the proxy they fail to install. So does the proxy affect installing .debs during a preseeded installation?
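    It can, at least indirectly. One plausible mechanism, offered as an assumption: d-i typically exports mirror/http/proxy as http_proxy in the installer environment, so the wget calls in late_command also go through apt-cacher-ng, and apt-cacher-ng by default only understands repository-style URLs and refuses arbitrary downloads. A sketch of a late_command that bypasses the proxy for those fetches (the URL and package name are placeholders):

        d-i preseed/late_command string \
            in-target sh -c 'http_proxy= wget -O /tmp/custom.deb http://files.example.com/custom.deb \
            && dpkg -i /tmp/custom.deb'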

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as { 'type': 'Bike', 'color': 'Red', 'commentIds': [3193,3194], 'attachmentIds': [123,413] } My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and this should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving this up to the right service object.

    Read the article

  • What are the Consequences for using Relative Location Headers?

    - by Alan Storm
    According to the spec, Location headers used in a redirect require a server name HTTP/1.1 301 Moved Permanently ... Location: http://example.com/foo/baz/bar However, in 2012, most web browsers will recognize a relative path and redirect you to the new location using the original server name HTTP/1.1 301 Moved Permanently ... Location: /foo/baz/bar Are there any negative/surprising consequences to using the relative URLs in the Location headers? My particular concern is how Google/search-engines will interpret this, but if there's anything else I'm not thinking about I'd love to hear it.

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: There is only a single PowerShell script on the servers that encrypts data (a simple csv file) and sends it with a preferred method (HTTP/HTTPS POST would work). There is no further control over those servers: I can't install any service, process, etc. I can only configure the script to execute every hour. The script will also have an encrypted username/password/license key for every server; it will compress the data and send it to me along with this information. So I need a service (I'm not sure if a Web Service is the right solution) on the cloud that will help me: It will receive the data sent from the servers. It will authenticate each request to recognize the sender using the license key/username/password and, most importantly, it will redirect/send this file to my SQL Server on the cloud (Azure). It should also separate the data according to the customer information in the license key, so each customer's data is stored in dedicated DB/tables on my SQL Server. All the processes above should be completed automatically, with no manual steps. Question: Is a Web Service (SOAP or RESTful) the right solution for that?
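    A RESTful receiver is a reasonable fit for this. As a sketch of the receiving side in C# (assuming ASP.NET Web API; the header name and the LicenseStore/CustomerData types are invented placeholders for the validation and per-customer storage logic):

        using System.Collections.Generic;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class UploadController : ApiController
        {
            [HttpPost]
            public async Task<IHttpActionResult> Post()
            {
                IEnumerable<string> keys;
                if (!Request.Headers.TryGetValues("X-License-Key", out keys))
                    return Unauthorized();

                string licenseKey = keys.First();
                if (!LicenseStore.IsValid(licenseKey))             // hypothetical validator
                    return Unauthorized();

                // The compressed, encrypted CSV payload from the PowerShell script.
                byte[] payload = await Request.Content.ReadAsByteArrayAsync();
                await CustomerData.SaveAsync(licenseKey, payload); // hypothetical per-customer store
                return Ok();
            }
        }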

    Read the article
