Search Results

Search found 5484 results on 220 pages for 'mod headers'.


  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.

    If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file, and sends it as the body of a 200 response to the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key:   /WcfSampleService/PersonService.svc/rest/fetch/3      →  Value: "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        Key:   "28cd4796-76b8-451b-adfd-75cb50a50fa6"                →  Value: /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with a concurrency of 100, and not sending any ETag headers in the requests: with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20MB memory; IIS maxed at 60% CPU and 100MB memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
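    To make the flow concrete, here is a minimal sketch of that proxy logic in Node. This is my own illustration rather than the GitHub sample itself: the backend port, the ./caches directory (assumed to exist) and the in-memory Map standing in for the shared memcached tag cache are all assumptions.

        var http = require('http');
        var fs = require('fs');

        var tagCache = new Map(); // URL path -> current ETag (memcached in the real sample)

        http.createServer(function (req, res) {
          var currentTag = tagCache.get(req.url);
          // 1. The client's tag is current: reply 304 without touching WCF.
          if (currentTag && req.headers['if-none-match'] === currentTag) {
            res.writeHead(304, { 'ETag': currentTag });
            return res.end();
          }
          // 2. The proxy has a cached body for the current tag: serve it locally.
          var cacheFile = currentTag && './caches/' + currentTag.replace(/"/g, '') + '.body';
          if (cacheFile && fs.existsSync(cacheFile)) {
            res.writeHead(200, { 'ETag': currentTag });
            return fs.createReadStream(cacheFile).pipe(res);
          }
          // 3. Fall through to the WCF service, caching the response as it is relayed.
          http.get({ host: 'localhost', port: 8080, path: req.url }, function (svcRes) {
            var tag = svcRes.headers.etag;
            if (tag) tagCache.set(req.url, tag);
            if (tag && svcRes.statusCode === 200) {
              svcRes.pipe(fs.createWriteStream('./caches/' + tag.replace(/"/g, '') + '.body'));
            }
            res.writeHead(svcRes.statusCode, svcRes.headers);
            svcRes.pipe(res);
          });
        }).listen(8010);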

    Read the article

  • Where can I get the OpenAL SDK for C++?

    - by Peter Short
    The OpenAL site I'm looking at is a crappy, outdated, broken SharePoint portal, and the SDK in the downloads section gives me an HTTP 500 error when I request it: http://connect.creativelabs.com/openal/Downloads/OpenAL11CoreSDK.zip I found an OpenAL SDK on Softpedia, and it has headers, but not alu.h or alut.h, which the tutorials I'm looking at apparently require for loading WAVs etc. What am I missing? Is OpenAL dead or something?

    Read the article

  • Where are my sub templates?

    - by Tim Dexter
    This one is for standalone/BIEE uses of Publisher. All the ERP/CRM/HCM folks are already catered for and can tuck into a nut cutlet and arugula salad. Sorry, I have just watched Food Inc, and even if only half of it is true, I'm still on a crusade in my house against mass-produced food. Wake up, World!

    If you have ventured into the world of sub templating, you'll be reaping some development benefit. In terms of shared report components and calculations they are very useful. Just exporting all of your report headers and footers to a single sub template can potentially save you hours and hours of work and make you look like a star. If someone in management gets it into their head that they would like Comic Sans font rather than Arial in their report headers, it's a 10-minute job rather than 100 hours! What about the rest of the report content? I hear you cry. It's coming in 11g: full master template support. Your management wants bright blue borders with yellow backgrounds for all the tables in your reports? 5-minute job!

    Getting back to sub templates and my comment about all the ERP/CRM/HCM folks being catered for: in the standalone release there is no out-of-the-box directory for you to drop your sub templates into. Dropping them into the main report directory would make sense, but they are not accessible there via a URL. An oversight on our part, and something that will be addressed in 11g, where sub templates are now first-class citizens in the world of BIP: you can upload them and BIP will know what to do with them. But what do you do right now? The easiest place to put them where BIP can 'see' them is to create a directory under the xmlpserver install directory in the J2EE container, e.g.:

        $J2EE_HOME/xmlpserver/xmlpserver/subtemplates

    You can call it whatever you want, but when the server is started up, that directory is accessible via a URL, i.e. http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf. You can therefore put the following at the top of your main templates to call the sub template:

        <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?>

    Of course, you can drop them anywhere you want; they just need to be in a web-server-mountable directory. Enjoy the arugula!

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

    Currently I'm working on a single-page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for 2 days, and finally I got a solution by myself. This might not be the best one, but it actually solved all my problems.

    In many articles it's said that you don't need to worry about the authentication of SignalR, since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check if the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established, the context user principal was empty, and I needed to authenticate and pass the principal by myself.

    Firstly I needed to create a class derived from "AuthorizeAttribute", which takes responsibility for authentication when a SignalR connection is established and when any method is invoked.

        public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
        {
            public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
            {
            }

            public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
            {
            }
        }

    The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, try to decrypt it and recover the login user's claims.

        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
            // authenticate by using the bearer token in the query string
            var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
            var ticket = secureDataFormat.Unprotect(token);
            if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
            {
                // set the authenticated user principal into the environment so that it can be used later
                request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
                return true;
            }
            else
            {
                return false;
            }
        }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket with an identity and it's authenticated, this means it's a valid token, and I pass the user principal into the request's environment property, which can be used later.

    Since my website is built in AngularJS, the SignalR client is in pure JavaScript, and the SignalR JavaScript client does not support setting customized HTTP headers, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but a restriction of WebSocket: for security reasons, WebSocket doesn't allow the client to set customized HTTP headers from the browser.

    Next, I needed to implement the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked when any SignalR method is invoked.

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
            var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
            // check the authenticated user principal from the environment
            var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
            var principal = environment["server.User"] as ClaimsPrincipal;
            if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
            {
                // create a new HubCallerContext instance with the principal generated from the token
                // and replace the current context so that in hubs we can retrieve the current user identity
                hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
                return true;
            }
            else
            {
                return false;
            }
        }

    Since I had passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need is to pass the principal into the context so that SignalR hubs can use it. Since the "User" property is read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with the principal assigned, and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User", as below.

        public class DefaultHub : Hub
        {
            public object Initialize(string host, string service, JObject payload)
            {
                var connectionId = Context.ConnectionId;
                ... ...
                var domain = string.Empty;
                var identity = Context.User.Identity as ClaimsIdentity;
                if (identity != null)
                {
                    var claim = identity.FindFirst("Domain");
                    if (claim != null)
                    {
                        domain = claim.Value;
                    }
                }
                ... ...
            }
        }

    Finally, I just needed to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

        app.Map("/signalr", map =>
        {
            // Setup the CORS middleware to run before SignalR.
            // By default this will allow all origins. You can
            // configure the set of origins and/or http verbs by
            // providing a cors options with a different policy.
            map.UseCors(CorsOptions.AllowAll);
            var hubConfiguration = new HubConfiguration
            {
                // You can enable JSONP by uncommenting the line below.
                // JSONP requests are insecure but some older browsers (and some
                // versions of IE) require JSONP to work cross domain
                // EnableJSONP = true
                EnableJavaScriptProxies = false
            };
            // Require authentication for all hubs
            var authorizer = new QueryStringBearerAuthorizeAttribute();
            var module = new AuthorizeModule(authorizer, authorizer);
            GlobalHost.HubPipeline.AddModule(module);
            // Run the SignalR pipeline. We're not using MapSignalR
            // since this branch already runs under the "/signalr" path.
            map.RunSignalR(hubConfiguration);
        });

    On the client side, I pass the Bearer token through the query string before starting the connection, as below.

        self.connection = $.hubConnection(signalrEndpoint);
        self.proxy = self.connection.createHubProxy(hubName);
        self.proxy.on(notifyEventName, function (event, payload) {
            options.handler(event, payload);
        });
        // add the authentication token to the query string
        // (we cannot use http headers since the web socket protocol doesn't support them)
        self.connection.qs = { Bearer: AuthService.getToken() };
        // connect to the hub
        self.connection.start();

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What are these errors when I try to "make" the driver of my wireless adapter?

    - by Tom Brito
    I got a wireless-to-USB adapter, and I'm having some trouble installing the drivers on Ubuntu. First of all, the readme says to use the make command, and I already get errors:

        $ make
        make[1]: Entering directory `/usr/src/linux-headers-2.6.35-22-generic'
        CC [M] /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c: In function ‘rtl8192_usb_probe’:
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12325: error: ‘struct net_device’ has no member named ‘open’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12326: error: ‘struct net_device’ has no member named ‘stop’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12327: error: ‘struct net_device’ has no member named ‘tx_timeout’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12328: error: ‘struct net_device’ has no member named ‘do_ioctl’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12329: error: ‘struct net_device’ has no member named ‘set_multicast_list’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12330: error: ‘struct net_device’ has no member named ‘set_mac_address’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12331: error: ‘struct net_device’ has no member named ‘get_stats’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12332: error: ‘struct net_device’ has no member named ‘hard_start_xmit’
        make[2]: *** [/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o] Error 1
        make[1]: *** [_module_/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-22-generic'
        make: *** [all] Error 2

    /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/ is the path where I copied the drivers on my computer. Any idea how to solve this? (I don't even know what the error is...)

    Update: sudo lshw -class network

        *-network
           description: Ethernet interface
           product: RTL8111/8168B PCI Express Gigabit Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:01:00.0
           logical name: eth0
           version: 03
           serial: 78:e3:b5:e7:5f:6e
           size: 10MB/s
           capacity: 1GB/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s
           resources: irq:42 ioport:d800(size=256) memory:fbeff000-fbefffff memory:faffc000-faffffff memory:fbec0000-fbedffff
        *-network DISABLED
           description: Wireless interface
           physical id: 2
           logical name: wlan0
           serial: 00:26:18:a1:ae:64
           capabilities: ethernet physical wireless
           configuration: broadcast=yes multicast=yes wireless=802.11b/g

    Read the article

  • iscsitarget suddenly broken after upgrade of the 12.04 Hardware Stack

    - by RapidWebs
    After an upgrade to the latest hardware stack on Ubuntu 12.04, my iSCSI service is no longer operational. The error from the service is:

        FATAL: Module iscsi_trgt not found.

    I have learned that I might need to reinstall the package iscsitarget-dkms; this package builds a driver from source during installation. During this build process it reports an error, and it has now also broken my package manager. Here is the relevant output:

        Building module:
        cleaning build area....
        make KERNELRELEASE=3.13.0-34-generic -C /lib/modules/3.13.0-34-generic/build M=/var/lib/dkms/iscsitarget/1.4.20.2/build........(bad exit status: 2)
        Error! Bad return status for module build on kernel: 3.13.0-34-generic (i686)
        Consult /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log for more information.
        Errors were encountered while processing:
         iscsitarget
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    And this is the information provided by make.log:

        for iscsitarget-1.4.20.2 for kernel 3.13.0-34-generic (i686)
        Fri Aug 15 22:07:15 EDT 2014
        make: Entering directory `/usr/src/linux-headers-3.13.0-34-generic'
        LD /var/lib/dkms/iscsitarget/1.4.20.2/build/built-in.o
        LD /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/built-in.o
        CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/tio.o
        CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/iscsi.o
        CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/nthread.o
        CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c: In function ‘worker_thread’:
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:73:28: error: void value not ignored as it ought to be
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:3: error: implicit declaration of function ‘get_io_context’ [-Werror=implicit-function-declaration]
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:21: warning: assignment makes pointer from integer without a cast [enabled by default]
        cc1: some warnings being treated as errors
        make[2]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o] Error 1
        make[1]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel] Error 2
        make: *** [module/var/lib/dkms/iscsitarget/1.4.20.2/build] Error 2
        make: Leaving directory `/usr/src/linux-headers-3.13.0-34-generic'

    I am at a loss on how to resolve this issue. Any help would be appreciated!

    Read the article

  • If the net is required to install an Atheros 8161 driver, how do I connect to the net without the driver?

    - by Paul
    If Ubuntu does not recognize hardware to connect to the net, and a net connection is necessary in order to install drivers for hardware that connects to the net, then how is such a system ever going to connect to the net? You can see the situation in this thread: How do I install drivers for the Atheros AR8161 Ethernet controller? and in this thread: build-essential and linux-headers-generic gives abort message Surely, surely, there is a way out of this catch-22.

    Read the article

  • last-modified/etags - to include or not?

    - by Kae Verens
    Google's PageSpeed plugin suggests that a website should include Last-Modified and ETag headers:

    "Specify a cache validator – Resources that do not specify a cache validator cannot be refreshed efficiently. Specify a Last-Modified or ETag header to enable cache validation."

    However, AskApache suggests that by not including them at all, we speed up websites by eliminating If-Modified-Since and If-None-Match requests: http://www.askapache.com/htaccess/apache-speed-last-modified.html

    These are in direct opposition – which should be implemented? I'm leaning towards AskApache's suggestion, as when I want a file cached, I don't want it refreshed.
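    For concreteness, here is a minimal Node sketch of the two strategies side by side. It is my own illustration, not code from either source, and the ports and body are arbitrary.

        var http = require('http');
        var crypto = require('crypto');

        var body = 'hello world';
        var etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

        // PageSpeed's advice: send a validator. Repeat requests still cost a round
        // trip, but an unchanged resource is answered with an empty-bodied 304.
        http.createServer(function (req, res) {
          if (req.headers['if-none-match'] === etag) {
            res.writeHead(304, { 'ETag': etag });
            return res.end();
          }
          res.writeHead(200, { 'ETag': etag });
          res.end(body);
        }).listen(8001);

        // AskApache's advice: no validators at all. A far-future Cache-Control
        // stops the browser revalidating until expiry, so the conditional
        // If-None-Match/If-Modified-Since requests never happen.
        http.createServer(function (req, res) {
          res.writeHead(200, { 'Cache-Control': 'public, max-age=31536000' });
          res.end(body);
        }).listen(8002);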

    Read the article

  • Can't update nor install anything due to "unrecoverable fatal error"

    - by Dunya Cengiz
    I haven't made any changes to my Ubuntu system, but since yesterday I can't download anything from the Software Center, nor can I update; I get an error every time. I don't know why this is happening. I am currently using Ubuntu 12.04. The important part of the message from the image:

        dpkg: unrecoverable fatal error, aborting:
         reading files list for package 'linux-headers-3.2.0-24': input/output error
        Error in function:

    Read the article

  • Evolution not downloading messages for offline use

    - by RandomX678
    Even though I've selected "Copy folder content locally for offline operation" on my inbox and "Automatically synchronize account locally" in my account options, Evolution still only downloads the headers for my inbox. When I click an email, it is downloaded from the server. I want all my emails to be available to read offline. As far as I know this should be possible, so is it my configuration that's wrong, or is it a bug?

    Read the article

  • HTTP header 302 error

    - by Katherine Katie
    Response headers:

        status: HTTP/1.1 302 Found
        connection: close
        pragma: no-cache
        cache-control: no-cache
        location: /
        location: /NKiXN/

    I don't know how it got this way. I used the W3 Total Cache plugin, but I have now deactivated it. Please help me solve this; the site is falling in the search engine rankings and Googlebot is unable to follow it. Urgent help required. If this is a configuration problem with the server, please let me know the solution. Site: http://onlinecheapestcarinsurance.co.uk/
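    One quick way to see exactly what a crawler receives is to fetch the headers yourself. A small Node sketch (purely illustrative; curl or any other HTTP client would show the same thing):

        var http = require('http');

        http.get({ host: 'onlinecheapestcarinsurance.co.uk', path: '/' }, function (res) {
          // A healthy front page should answer 200 here; a 302 whose Location
          // points at a random path like /NKiXN/ is a common sign of an
          // injected redirect rather than a caching-plugin setting.
          console.log(res.statusCode, res.headers.location);
        });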

    Read the article

  • Should I set NOINDEX header for my JS, CSS and image files?

    - by Yoga
    Is there any harm if my site sends NOINDEX headers for all of its static assets? For image files, I am referring to the valueless ones, e.g. background images, button images, etc.

    Update: more background information. I have this concern since Google recently said they also execute JS and might fetch content via Ajax. So, for example, if I send noindex for my jQuery script, Google would not be able to use it to load Ajax content; I suppose that is not good for my site's SEO, right?
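    For reference, the header in question is X-Robots-Tag, which Google documents as the HTTP-header equivalent of the robots meta tag. Below is a minimal Node sketch of sending it only for static assets; the extension list and response body are illustrative assumptions, not a recommendation either way on the SEO question.

        var http = require('http');

        http.createServer(function (req, res) {
          // Mark static assets as noindex while leaving pages untouched.
          if (/\.(js|css|png|gif|jpg)$/.test(req.url)) {
            res.setHeader('X-Robots-Tag', 'noindex');
          }
          res.end('...'); // the asset or page body would be served here
        }).listen(8080);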

    Read the article

  • Is using HTML entities (for language-specific characters) in UTF-8 necessary?

    - by Drachenzauberei
    As in the subject line. I saw this situation the other day on a page, and it felt weird to me. Except for markup-delimiting characters such as angle brackets or the ampersand, escaping, say, German umlauts shouldn't be necessary, should it? I checked the encoding server-side, in-page, and by way of the HTTP headers; it looks completely UTF-8 to me. What's your take on this, and do you reckon it could adversely affect SEO or SERP placement?

    Read the article

  • problems in fetching upgrades

    - by andre
    I am presently using Ubuntu 10.04 and I would like to upgrade to the latest Ubuntu release, but while trying to upgrade I get these errors. Can anybody help me solve this and proceed? I am just a beginner with Ubuntu. Thanks.

        W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-image-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-headers-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80]

    Read the article

  • How to Use Heading Tags and Alt Attributes

    If you're working with HTML code for the first time, you may be wondering why heading tags and alt attributes are used as a guide to the data within your site's architecture. Search Engine Optimization consultants use HTML header tags to summarize the topic of the page they introduce. Google and other search engines analyze the text inside header tags in their algorithms to assign web page rankings. To rank for your target keywords and phrases, incorporate them into your HTML page headers.

    Read the article

  • Important SEO Elements When Designing Your Website

    On-page optimization is an important factor in determining your website's theme, especially for Yahoo and Bing, so you have to consider SEO elements while designing your website: putting your keywords in your pages' titles and headers is good for your visitors and good for the search engines. You should make sure that the front of your title contains the keywords that are important to your SEO efforts, because Google only picks up the first sixty to seventy characters. Also, it is important to use natural language in your content and don't try...

    Read the article

  • How to Get Information About Your Backups

    When you need to restore but aren't 100% sure about the contents of your backup files, what do you do? Head to the headers. Grant Fritchey explains how to find the useful bits in these huge stores of information and make sure you restore the right files.

    Read the article

  • Installing NVIDIA driver causes black screen (750M)

    - by aftrumpet
    I have a dual-boot setup on a Lenovo IdeaPad Y500 with an NVIDIA 750M, and I am having problems installing the graphics driver. I have made sure to install both linux-headers-generic and linux-source, yet I have ended up with a black screen whether I install nvidia-current, nvidia-current-updates, nvidia-experimental-310, or nvidia-319. I even tried enabling proprietary drivers through Settings and still ended up with a black screen on boot. Is my graphics card just not supported yet, or is there a way to fix this?

    Read the article

  • Can't Install Packages ubuntu 12.04

    - by Pumpkin
    I'm trying to install software using apt-get and the Ubuntu Software Center UI. When I try the Ubuntu Software Center, it gets stuck saying "installing" and installs nothing, and when I try apt-get it gets stuck at "0% [Waiting for headers]". I tried modifying the sources.list file. I installed Ubuntu this morning. Do you have any suggestions for me? Thanks in advance for your time.

    Edit: after a while, I get the error "502 Bad Gateway".

    Read the article

  • Adding gradient header to your report

    - by SSRSGeek
    As on normal websites, we as web developers like to have gradient headers in our reports; the idea is very simple. First, add an image to your report; I will call it HeaderStrip1. In the properties, choose the background and choose the source "Embedded" (since we added the image to the report), then choose HeaderStrip1 as the value. Make sure that the BackgroundRepeat is "RepeatX". Final result :D

    Read the article

  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed the yum-security plugin before, and I was going to do an update by entering the following command:

        [root@localhost /]# yum update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Skipping security plugin, no data
        Setting up Update Process
        Resolving Dependencies
        Skipping security plugin, no data
        --> Running transaction check
        ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated
        ---> Package glibc.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-common.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated
        ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated
        ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated
        ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated
        ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated
        ---> Package nscd.i686 0:2.11.1-4 set to be updated
        ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated
        ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated
        --> Finished Dependency Resolution
        --> Running transaction check
        ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package                  Arch    Version                 Repository      Size
        ================================================================================
        Installing:
         kernel-PAE               i686    2.6.32.11-99.fc12       updates          20 M
         kernel-PAE-devel         i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-PAEdebug-devel    i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-debug-devel       i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-devel             i686    2.6.32.11-99.fc12       updates         6.1 M
        Updating:
         eject                    i686    2.1.5-17.fc12           updates          49 k
         glibc                    i686    2.11.1-4                updates         4.2 M
         glibc-common             i686    2.11.1-4                updates          14 M
         glibc-devel              i686    2.11.1-4                updates         953 k
         glibc-headers            i686    2.11.1-4                updates         590 k
         gnome-themes             noarch  2.28.1-3.fc12           updates         1.5 M
         gtk2                     i686    2.18.9-3.fc12           updates         3.2 M
         gtk2-immodule-xim        i686    2.18.9-3.fc12           updates          60 k
         kernel-firmware          noarch  2.6.32.11-99.fc12       updates         968 k
         kernel-headers           i686    2.6.32.11-99.fc12       updates         749 k
         libnetfilter_conntrack   i686    0.0.101-1.fc12          updates          37 k
         media-player-info        noarch  5-1.fc12                updates          32 k
         nscd                     i686    2.11.1-4                updates         189 k
         perf                     noarch  2.6.32.11-99.fc12       updates          79 k
         rhythmbox                i686    0.12.6-5.fc12           updates         4.0 M
         sysvinit-tools           i686    2.87-3.dsf.fc12         updates          58 k
        Removing:
         kernel-PAE               i686    2.6.31.12-174.2.3.fc12  @updates         72 M

        Transaction Summary
        ================================================================================
        Install       5 Package(s)
        Upgrade      16 Package(s)
        Remove        1 Package(s)
        Reinstall     0 Package(s)
        Downgrade     0 Package(s)

        Total download size: 75 M
        Is this ok [y/N]:

    But then I changed my mind: I decided to do a security-only update instead of a full update, so I entered the following command:

        [root@localhost /]# yum --security update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Setting up Update Process
        Resolving Dependencies
        Limiting packages to security relevant ones
        http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try.
        You could try using --skip-broken to work around the problem
        ^C[root@localhost /]#

    As can be seen in the output, when I run the yum --security update command it did show the "Limiting packages to security relevant ones" message, so it is aware of the option. But I don't know why it keeps reporting HTTP error 416. I searched Google and found the following description of the error, but it doesn't seem to help much:

    HTTP ERROR 416 - Requested Range Not Satisfiable. A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long.

    It suggests using the --skip-broken option; I tried it, and the output is the same. I have already tested this many times: it just doesn't work whenever the --security option is used. What could be the possible cause of this problem?

    Read the article

  • VMware Tools in Ubuntu guest on VMware Server 2 do not build

    - by ulf
    When trying to build the VMware Tools in my Ubuntu 9.10 64-bit guest on a VMware Server 2.0.2 host with Debian 5, I'm getting strange errors like:

        Building the vmmemctl module.
        Using 2.6.x kernel build system.
        make: Entering directory '/tmp/vmware-config8/vmmemctl-only'
        make -C /lib/modules/2.6.31-19-server/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
        make[1]: Entering directory '/usr/src/linux-headers-2.6.31-19-server'
        CC [M] /tmp/vmware-config8/vmmemctl-only/backdoorGcc64.o
        In file included from /tmp/vmware-config8/vmmemctl-only/backdoor.h:29,
                         from /tmp/vmware-config8/vmmemctl-only/backdoorGcc64.c:38:
        /tmp/vmware-config8/vmmemctl-only/vm_basic_types.h:108:7: warning: "__FreeBSD__" is not defined
        CC [M] /tmp/vmware-config8/vmmemctl-only/os.o
        In file included from /tmp/vmware-config8/vmmemctl-only/os.c:51:
        /tmp/vmware-config8/vmmemctl-only/compat_wait.h:78: error: conflicting types for ‘poll_initwait’
        include/linux/poll.h:70: note: previous declaration of ‘poll_initwait’ was here
        make[2]: *** [/tmp/vmware-config8/vmmemctl-only/os.o] Error 1
        make[1]: *** [_module_/tmp/vmware-config8/vmmemctl-only] Error 2
        make[1]: Leaving directory '/usr/src/linux-headers-2.6.31-19-server'
        make: *** [vmmemctl.ko] Error 2
        make: Leaving directory '/tmp/vmware-config8/vmmemctl-only'
        Unable to build the vmmemctl module.

    (The make output above was originally in German and has been translated.) I googled half the Internet but couldn't come to a solution. None of the kernel modules seems to build correctly. While googling I read something about a bug in this kernel tree.

    Read the article

  • SPF hardfail and DKIM failure when recipient has e-mail forwarding

    - by Beaming Mel-Bin
    I configured hardfail SPF for my domain and DKIM message signing on my SMTP server. Since this is the only SMTP server that should be used for outgoing mail from my domain, I didn't foresee any complications. However, consider the following situation: I sent an e-mail message via my SMTP server to my colleague's university e-mail. The problem is that my colleague forwards his university e-mail to his GMail account. These are the headers of the message after it reaches his GMail mailbox:

        Received-SPF: fail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) client-ip=192.168.128.100;
        Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) [email protected]; dkim=hardfail (test mode) [email protected]

    (Headers have been sanitized to protect the domains and IP addresses of the non-Google parties.)

    GMail checks the last SMTP server in the delivery chain against my SPF and DKIM records (rightfully so). Since the last SMTP server in the delivery chain was the university's server and not my server, the check results in an SPF hardfail and a DKIM failure. Fortunately, GMail did not mark the message as spam, but I'm concerned that this might cause a problem in the future. Is my implementation of SPF hardfail perhaps too strict? Any other recommendations or potential issues that I should be aware of? Or maybe there is a more ideal configuration for the university's e-mail forwarding procedure? I know that the forwarding server could possibly change the envelope sender, but I see that getting messy.

    Read the article

  • Trying to install ffmpeg-php and having installation issues.

    - by dallasclark
    I've installed ffmpeg successfully using the ffmpeginstaller 3 series (http://www.ffmpeginstaller.com/download). ffmpeg is working fine from bash, with no known issues. ffmpeginstaller is meant to install ffmpeg-php as well, but the module cannot be found, and I receive an error when I execute php -v:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0

    Looking at the /usr/lib64/php/modules/ directory, it doesn't contain the ffmpeg.so file. I've tried to install ffmpeg-php manually, but I receive the following error:

        checking for ffmpeg headers... configure: error: ffmpeg headers not found. Make sure you've built ffmpeg as shared libs using the --enable-shared option

    Should I install ffmpeg with series 4 or 5 of ffmpeginstaller, or does someone know how to fix this issue? Thanks in advance!

    System specs:

        cat /etc/redhat-release
        CentOS release 5.5 (Final)

        cat /proc/version
        Linux version 2.6.18-028stab068.5 (root@rhel5-64-build) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1

        php -v
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0
        PHP 5.2.13 (cli) (built: Mar 2 2010 18:08:48)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies

    Any other details you need, just let me know.

    Read the article
