Search Results

Search found 21004 results on 841 pages for 'assembly load'.


  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:
    - a single-page Ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)
    And our hardware architecture today is:
    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats
    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I distribute the work of the backend Windows service across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!
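    A minimal sketch of one common direction, the competing-consumers pattern: move the queueing out of the single Windows service into a shared, durable queue (a message broker, or even a SQL Server table used as a queue), then run identical copies of the routing service on several machines, each pulling work from the shared queue. Any one instance can then die without taking the queue with it. All type names below are made up for illustration:

    using System;
    using System.Threading;

    class ChatRouterWorker
    {
        // Every machine in the pool runs this same loop. ChatQueue,
        // AgentDirectory, ChatRequest and Agent are made-up stand-ins for
        // whatever shared queue and agent registry you adopt (MSMQ,
        // RabbitMQ, or even a SQL Server table used as a queue).
        static void Main()
        {
            var queue = new ChatQueue("shared-broker-address");
            var agents = new AgentDirectory();
            while (true)
            {
                // Atomically claim one pending chat; if a peer instance
                // claimed it first, Dequeue returns null and we poll again.
                ChatRequest request = queue.Dequeue();
                if (request == null) { Thread.Sleep(250); continue; }

                Agent agent = agents.FindAvailable();
                if (agent != null)
                    agent.Deliver(request);   // e.g. hand back to the Comet server
                else
                    queue.Requeue(request);   // nobody free right now; try later
            }
        }
    }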


  • /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor disappeared on Ubuntu 11.10

    - by Bob
    I have an Ubuntu 11.10 server that has been up for 210 days. I have been frequently doing apt-get upgrade every few weeks, and this time I noticed that my server load average just shot up. The last time this happened between upgrades, it was because the cpu scaling governor was set to ondemand. But this time when I tried to list the contents of /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor the file is missing. There isn't even a cpufreq folder anymore! How do I fix this and ensure there is no cpu scaling going on?
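    A missing cpufreq directory usually means the kernel no longer has a frequency-scaling driver loaded at all (not uncommon after a kernel upgrade). A hedged first check, assuming the generic acpi-cpufreq driver applies to this hardware:

    # Does the kernel expose a scaling driver at all?
    ls /sys/devices/system/cpu/cpu0/cpufreq 2>/dev/null || echo "no driver loaded"
    # Try loading the generic ACPI driver (assumption: Intel/AMD with ACPI support)
    sudo modprobe acpi-cpufreq
    # If the directory reappears, inspect and pin the governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor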


  • Godaddy vs. Route53 for DNS

    - by tim peterson
    I have my website set up as an EC2 instance and my DNS is currently Godaddy. I'm considering switching to Amazon AWS Route53 for DNS. The one thing I noticed however is that Route53 charges monthly fees but I never get any bills from Godaddy. Obviously, nobody likes getting charged for something they can get for free. If Godaddy is cheaper, can anyone confirm that the page load speed of an EC2 instance is actually better via Route53 vs. Godaddy? If it is not faster or cheaper, can someone point out other reasons it might make sense to do this switch? thanks, tim


  • Gmail firing background tasks in Chrome and Firefox

    - by Keith Bloom
    I am experiencing a strange problem with Gmail running on Vista, and it started when Windows Update last patched my computer. Since then, when Gmail is open in either Chrome or Safari, the pointer changes to the Working in Background state every couple of seconds. You can imagine how distracting this is. I had never used Gmail in IE before, so I loaded it up; Gmail wanted to load the GoogleTalk ActiveX control, which I declined. Gmail in IE (I'm using version 8) didn't fire these background processes, so I'm thinking it has something to do with GoogleTalk. Has anyone experienced this problem? Also, does anyone know how to disable GoogleTalk in Firefox or Chrome?


  • Apache2 enable .ini mod in /etc/php5/mods-available

    - by GuiTeK
    One can use the a2enmod [module] command to enable mods located in /etc/apache2/mods-available. But what about mods in /etc/php5/mods-available? When I try to enable a mod in this directory (e.g. xdebug), I get the following error: ERROR: Module xdebug does not exist! Yet, /etc/php5/mods-available/xdebug.ini exists. I understand a2enmod may work only with *.load files (it makes sense, since *.ini files are just configuration files), but then what's the correct way of enabling modules located in /etc/php5/mods-available?
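    A sketch of the usual options, assuming a Debian/Ubuntu layout (php5enmod only exists on newer PHP packaging):

    # On newer packaging, the PHP counterpart of a2enmod:
    sudo php5enmod xdebug
    # On older packaging, the manual equivalent: symlink the .ini into the
    # directory PHP actually scans, then restart Apache
    sudo ln -s /etc/php5/mods-available/xdebug.ini /etc/php5/conf.d/xdebug.ini
    sudo service apache2 restart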


  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment. Most of the time, the process is pretty much the same. For example, we often use tools like HttpWatch and Firebug to monitor the application, and then perform load tests using JMeter or Selenium. In addition, there is the fine tuning of the different performance-based tuning parameters that are outlined in the documentation and in blogs written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog).

    When a load test produces a significant reduction in the system's resources (memory), one of the causes that plays a role in memory "leakage" is the implementation of the navigation menu UI. OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu. In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is constructed using standard ADF components. These sample menu UIs basically enable the underlying navigation model to visualize itself to some extent. However, due to certain limitations of these sample menu implementations (i.e. deeper sub-levels of navigation items, look-and-feel, etc.), many customers have developed their own custom navigation menus using a combination of HTML, CSS and jQuery. While this is supported somewhat by the framework, it is important to know some of the best practices for ensuring that the navigation menu does not leak. In addition, in this blog I will point out a leak (BUG) that is in the sample templates.

    OK, E.T., the suspense is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here. In both of the included templates, the example given for handling navigation back to the "Home" page will essentially produce a nice little memory leak every time the link is clicked.

    Let's take a look at a simple example, which uses the default template in Spaces. The outlined section below is the "link" which enables a user to navigate back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination. Note: "home" in this case is the navigation model reference (id) that enables the display of the "pretty URL".

    Next, notice the current URL, which is displayed in the browser. Remember that PortalSiteHome = home. The other highlighted item, adf.ctrl-state, is very important to the framework. This item is basically a persistent query parameter, which is used by the (ADF) framework to manage the current session and page instance. Without this parameter present, among other things, browser back-button navigation will fail. In this example, the value for this parameter is currently 95K25i7dd_4.

    Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that the navigation is indeed successful and the adf.ctrl-state is also in the URL. For those wondering why the URL displays Page3.jspx instead of Page2.jspx: the (file) naming convention for pages created at runtime in Spaces starts at Page1, and then increments as you create additional pages. The name of the actual link (i.e. Page2) is the page "title" attribute. So the moral of the story is: unlike design-time created pages, for runtime-created pages the name of the file will 99% of the time never match the name that appears in the link.

    Next is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful. In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter. So what's the issue? Can you remember what the value was for the adf.ctrl-state? The current value is 95k25i7dd_149. However, the previous value was 95k25i7dd_4. Here is what happened. Remember when (mouse) hovering over the link produced the following target URL:
    http://localhost:8888/webcenter/spaces/NavigationTest/home
    This is great for the browser, as this URL will navigate to the intended target. However, what is missing is the adf.ctrl-state parameter. Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object). The previous adf.ctrl-state is basically orphaned while continuing to be alive in memory. Note: the auto-creation of the adf.ctrl-state does happen initially when you invoke the Spaces application for the first time. The following is the line of code which produced the issue:
    <af:goLink destination="#{boilerBean.globalLogoURIInSpace}" ...
    Here the boilerBean is responsible for returning the "string" URL, which in this case is /spaces/NavigationTest/home. Unfortunately, again, what is missing is adf.ctrl-state. Note: there is more than one instance of the goLinks in the sample templates.

    So E.T., how can I correct this? There are 2 simple fixes. For the goLink's destination, use the navigation model to return the actual "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state:
    <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}" ... />
    Note: the node value is the [navigation model id]. Using a goLink does solve the main issue. However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash". In a Spaces application, this may be an annoyance to the users. Another way to solve the leakage problem, and also remove the flash between navigations, is to use an af:commandLink. For example, here is the code example for this scenario:
    <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">
       <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/>
    </af:commandLink>
    Here, the navigation node for where home is located is delivered by way of the attribute to the commandLink. The actual navigation is performed by processAction, which needs the "node" value.

    E.T., OK, you solved the OOTB sample BUG, what about my custom navigation code? I have seen many implementations of creating a navigation menu through custom code. In addition, there are some blog sites that also give detailed examples. The majority of these implementations are very similar. The code usually involves standard HTML tags (i.e. DIVs, UL, LI, etc.) and either CSS or JavaScript (jQuery) to produce the flyout/drop-down effect. The navigation links in these cases are standard <a href...> tags. Although this type of approach is not fully accepted by the ADF community, it does work. The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL. For example:
    <a href="${contextRoot}${menu.goLinkPrettyUrl}">
    The main reason why this type of approach is popular is that links created this way (as with af:goLinks) make the pages crawlable by search engines. CommandLinks are currently not search friendly. However, in the case of a Spaces instance this may be acceptable. So in this use case, af:commandLinks would replace the <a> (or goLink) tags. The example code given for the af:commandLink above is still valid.

    One last important item. If you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI. In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view. The recommendation here would be to use a pure CSS approach to achieve the dropdown effects.

    One very important thing to note: due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906). Otherwise the leak is still present even when using the goLinkPrettyUrl method. Thanks E.T.! Now I can phone home and not worry about my application running out of resources due to my custom navigation!


  • SELinux Contexts

    - by Josh
    I am configuring Apache/PHP and noticed Apache complaining about permissions with the PHP shared object:
    Starting httpd: httpd: Syntax error on line 206 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib/httpd/modules/libphp5.so into server: /usr/lib/httpd/modules/libphp5.so: cannot restore segment prot after reloc: Permission denied
    I looked at the context (it started fine with enforcement off) and found:
    [root@HDSSERVER conf]# ls --lcontext /usr/lib/httpd/modules/libphp5.so
    -rwxr-xr-x 1 root:object_r:httpd_modules_t root root 15565418 May 10 08:39 /usr/lib/httpd/modules/libphp5.so
    Shouldn't httpd (Apache) be able to access files with a context of httpd_modules_t? I got it fixed by applying:
    chcon -t textrel_shlib_t '/usr/lib/httpd/modules/libphp5.so'
    But I would have thought the httpd-specific context (httpd_modules_t) would be the one that works, rather than this one. Can someone explain this to me?


  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML Canvas element was still kinda fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, translation would be done by manipulating the DOM position of the canvases, while sprite animation would consist of altering the visibility of the already-drawn canvases. (z-indexes, of course, were the tricky part.) It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones. Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help to avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript - especially the less optimized ones). It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback of doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical about the claims that canvas+js - especially WebGL - will ever be a good alternative to native code). Any thoughts?
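    A sketch of the scheme described above (hypothetical, assuming one absolutely positioned canvas per pre-drawn sprite frame): translation moves the canvas elements themselves, while animation just toggles visibility among already-rendered canvases.

    // frames: array of pre-drawn HTMLCanvasElement, one per sprite frame
    function SpriteActor(frames) {
      this.frames = frames;
      this.current = 0;
      frames.forEach(function (f) {
        f.style.position = "absolute";
        f.style.visibility = "hidden";
      });
      frames[0].style.visibility = "visible";
    }
    // Translation: move the DOM nodes; nothing is redrawn.
    SpriteActor.prototype.moveTo = function (x, y) {
      this.frames.forEach(function (f) {
        f.style.left = x + "px";
        f.style.top = y + "px";
      });
    };
    // Animation: flip visibility between already-rendered canvases.
    SpriteActor.prototype.nextFrame = function () {
      this.frames[this.current].style.visibility = "hidden";
      this.current = (this.current + 1) % this.frames.length;
      this.frames[this.current].style.visibility = "visible";
    };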


  • continuous hard disk access - slowing down my machine

    - by suresh
    I can tell from the hard disk access LED on the front of my machine that the hard disk is being accessed more often, and probably because of that, my machine is quite slow. The machine becomes unresponsive even when the load as seen from the w command is around 1 or so. My desktop is a Dell OptiPlex 360 running Ubuntu 10.04. My questions are: How do I quantify hard disk access and see whether it is more than "normal"? And if it is more than normal, what are my solutions? thanks, suresh
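    For the quantifying part, a reasonable starting point, assuming the sysstat and iotop packages are available for 10.04:

    sudo apt-get install sysstat iotop
    # Per-device utilization, refreshed every second; %util near 100
    # means the disk is saturated
    iostat -x 1
    # Which processes are actually doing the I/O
    sudo iotop -o
    # Blocks in/out, plus processes blocked on I/O (the 'b' column)
    vmstat 1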


  • installing and running google-chrome on an old Ubuntu 7.10 legacy system

    - by 12632
    I am trying to get google-chrome to work on Ubuntu 7.10. I installed it with --force-depends and got it to install, but now when I try to run it, I get this error: /usr/bin/google-chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory Is there a way to still get google-chrome to load even without this dependency satisfied? This is an old system that needs to keep this old 7.10 Ubuntu version and I would like to have google-chrome if possible installed, even if it means no sound or other features that are not compatible.
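    A hedged diagnostic sketch - there is no guarantee that compatible library builds even exist for 7.10:

    # List every shared library Chrome cannot find, not just the first
    ldd /opt/google/chrome/chrome | grep "not found"
    # If a compatible libnss3.so can be obtained (e.g. copied from another
    # machine), Chrome can be pointed at it without touching system paths.
    # ~/chrome-libs is a hypothetical location:
    mkdir -p ~/chrome-libs
    LD_LIBRARY_PATH=~/chrome-libs /usr/bin/google-chrome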


  • double screen in Ubuntu 12.04?

    - by johan
    I am using Ubuntu 12.04 and my video card is an ATI Radeon 5000. I cannot use a double screen (extended desktop). I get this error:
    The selected configuration for displays could not be applied
    requested position/size for CRTC 148 is outside the allowed limit: position=(1280, 0), size=(1280, 768), maximum=(1440, 1440)
    I tried all display settings but it does not work. Some outputs from the system:
    root@ubuntu:~# lshw -C display
    *-display
       description: VGA compatible controller
       product: Madison [Radeon HD 5000M Series]
       vendor: Hynix Semiconductor (Hyundai Electronics)
       physical id: 0
       bus info: pci@0000:01:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
       configuration: driver=fglrx_pci latency=0
       resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff
    root@ubuntu:~# aticonfig --initial
    Uninitialised file found, configuring.
    Using /etc/X11/xorg.conf
    Saving back-up to /etc/X11/xorg.conf.original-0
    root@ubuntu:~# cat /etc/X11/xorg.conf
    Section "ServerLayout"
        Identifier "aticonfig Layout"
        Screen 0 "aticonfig-Screen[0]-0" 0 0
    EndSection
    Section "Module"
        Load "glx"
    EndSection
    Section "Monitor"
        Identifier "aticonfig-Monitor[0]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection
    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection
    Section "Screen"
        Identifier "Default Screen"
        DefaultDepth 24
    EndSection
    Section "Screen"
        Identifier "aticonfig-Screen[0]-0"
        Device "aticonfig-Device[0]-0"
        Monitor "aticonfig-Monitor[0]-0"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
        EndSubSection
    EndSection
    I would appreciate any suggestions on how to solve the problem. Thank you
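    The maximum=(1440, 1440) in the error message is the telltale part: the X virtual screen is too small to hold both displays side by side. One commonly suggested workaround - an assumption about this particular setup, so back up xorg.conf first - is to enlarge the virtual desktop in the active Screen section:

    Section "Screen"
        Identifier "aticonfig-Screen[0]-0"
        Device "aticonfig-Device[0]-0"
        Monitor "aticonfig-Monitor[0]-0"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
            Virtual 2560 768    # wide enough for 1280+1280 side by side
        EndSubSection
    EndSection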


  • Issue in extending a web application in SharePoint

    - by GHIYA
    I have extended a web application in a farm. The main server is vsmoss1, where I set up:
    - vsmoss1 -> web application (80)
    - vm.com -> extended web app (of the above one), anonymous
    The WFE servers are vsmoss2 and vsmoss3, and I have load balanced them so that hits to vm.com go to vsmoss2 and vsmoss3. When I hit vm.com it works fine without authentication (it shows the content query web part on my page too). I know there is no need to do that, but when I hit vsmoss2 and vsmoss3 directly, it shows me an error on my content query web part... any solution for that? Finding this strange, I tried the following:
    - I closed both extended web apps on vsmoss2 and vsmoss3. Result: the site is up and running, but this time with authentication.
    - I closed both the extended and the main web application: the site on vsmoss2 and vsmoss3 is down.
    - I closed only the main web application on vsmoss2 and vsmoss3: the site is up and running without authentication.
    Anyone have an idea why it is behaving like this...?


  • Server-to-Switch Trunking in Procurve switch, what does this mean?

    - by MattUebel
    I am looking to set up switch redundancy in a new datacenter environment. IEEE 802.3ad seems to be the go-to concept for this, at least when paired with a technology that gets around the "single switch" limitation for the link aggregation. Looking through the brochure for a ProCurve switch, I see: Server-to-Switch Distributed Trunking, which allows a server to connect to two switches with one logical trunk; increases resiliency and enables load sharing in virtualized data centers http://www.procurve.com/docs/products/brochures/5400_3500%20Product%20Brochure4AA0-4236ENW.pdf I am trying to figure out how this relates to the 802.3ad standard, as it seems that it would give me what I want (one server has 2 NICs, each connected to a separate switch, together forming a single logical NIC, which would provide the happy redundancy we want), but I guess I am looking for someone familiar with this concept who could add to it.


  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy

    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world.

    Nowadays, it's unusual for any business to follow this vertical integration model, because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: for Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP-agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.

    In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.


  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place it in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

    static void Main()
    {
        // Load parameters from app.config.
        // Get documents from queue.
        var files = someInterface.GetFiles(someFilterOrRegexPattern);
        foreach (var file in files)
        {
            // Extract metadata from the file.
            // Validate some attributes of the file; add any validation errors
            // to an in-memory structure (e.g. List<ValidationErrors>).
            if (isValid)
            {
                // Upload using some wrapper for an ORM.
                someInterface.Upload(meta.Param1, meta.Param2, ...);
            }
            else
            {
                // Bounce the file.
            }
        }
        // Report any validation errors (via message bus or SMTP or some such).
    }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!


  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I'd share my favorites, and would love to hear yours too!

    Interleaving on drum memory
    Back in ye olde days before I'd been born (we're talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm
    Conway's Game of Life has attracted numerous implementations over the years, but Bill Gosper's Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it's used in a clever way, so it makes the list.

    Elite's procedural generation
    Ok, so this isn't exactly a memory optimization - more a storage optimization - but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that's about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names (a toy sketch of this name-stitching idea appears after this post). In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all of that in assembly language. Other games of the time used similar techniques too - The Sentinel's landscape generation algorithm being another example.

    Modern Garbage Collectors
    Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; that there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they're much, much rarer.

    A cautionary note
    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there's actually a problem is doing it wrong. So what have I missed? Tell me about the ingenious (or stupid) tricks you've seen people use. Ben
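    The name-stitching trick is easy to illustrate. Here is a toy sketch - not Braben and Bell's actual algorithm - that seeds a deterministic PRNG per planet and assembles a name from a small lookup table, so a whole universe of names costs only the table plus one integer per planet:

    using System;

    class PlanetNames
    {
        // A small lookup table of name fragments; every planet's name is
        // derived from this table plus a deterministic per-planet seed.
        static readonly string[] Parts =
            { "la", "ve", "ti", "so", "ri", "na", "di", "qu", "an", "ge" };

        static string NameFor(int planetIndex)
        {
            var rng = new Random(planetIndex);   // deterministic per planet
            int syllables = 2 + rng.Next(3);     // 2-4 fragments
            var name = "";
            for (int i = 0; i < syllables; i++)
                name += Parts[rng.Next(Parts.Length)];
            return char.ToUpper(name[0]) + name.Substring(1);
        }

        static void Main()
        {
            for (int i = 0; i < 5; i++)
                Console.WriteLine(NameFor(i));   // same names on every run
        }
    }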


  • Can't install Ubuntu on Asus Eee 1015pem

    - by Peter
    I'm having trouble installing Ubuntu. I use an ASUS Eee 1015pem netbook. Recently, my netbook got wet: I had it inside my backpack and all my things got wet. The netbook boots up fine, but it will not load the OS. I downloaded Ubuntu onto my external hard drive and changed the settings in my BIOS to boot from a removable device. Nothing happens. When I plug in my external hard drive I'm not able to get to the boot icon; I have to unplug the external hard drive, set my boot settings (I tried both Removable and CD-ROM), then plug my external drive back in, and nothing happens with either setting. My Asus never came with a recovery disk; it is supposed to have built-in recovery by pressing F9 in the BIOS. Also, I need to disable Boot Booster in the BIOS, and Boot Booster is not even an option in the BIOS. My friend told me to try installing Ubuntu, but now I'm having no luck with Ubuntu. Any suggestions?


  • Why use FQDN as DNS-server option in DHCP?

    - by Filip Haglund
    I've seen multiple default configurations of DHCP servers with an FQDN set as the DNS-server option. Doesn't this imply a catch-22, or the need for that DNS server to be in the hosts file of every single client? An example from dhcp3-server in Debian 6:
    option domain-name-servers ns1.internal.example.org;
    I can see how using a DNS name is convenient, because it's only an A record to change, and the servers can be load balanced if wanted, but I don't see how the client is going to resolve the name. Why are people using FQDNs as DNS-server addresses in DHCP?


  • How to manage security of these self-hosted Web APIs, to ensure that requests for data are authenticated?

    - by Husrat Mehmood
    Let's pretend I am going to work on an enterprise application. Say I have 11 modules in the application, and I would have to develop dashboards for every role in the organization for which we are building the application. We decided to use ASP.NET Web API and return JSON data from our APIs. We are going to include 11 self-hosted Web API projects in our application - one self-hosted Web API for every module. All 11 modules are connected to one SQL Server 2012 database. Once the APIs are ready, we would have to create business dashboards (based upon roles in the organization). So now my Web API client is an ASP.NET MVC application; ASP.NET MVC will consume those Web APIs. Here is the part all of this explanation has been leading up to: How should I manage security across all 11 self-hosted Web APIs? How should I ensure that only authenticated requests reach them? If I authenticate a user by login and password, redirect the user to the dashboard designed for that user's role, and load data by consuming the Web APIs - how do I ensure that the requests coming in for data are authenticated?
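    One common shape for this, offered as a minimal sketch: issue a token at login, then have every self-hosted API validate it in a message handler before any controller runs. TokenStore below is a hypothetical helper, not a Web API type:

    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Registered once per self-hosted API:
    //     config.MessageHandlers.Add(new TokenAuthHandler());
    public class TokenAuthHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var auth = request.Headers.Authorization;
            // TokenStore is a made-up helper: however tokens are issued at
            // login, all 11 APIs must be able to validate them the same way.
            if (auth == null || auth.Scheme != "Bearer" || !TokenStore.IsValid(auth.Parameter))
            {
                return request.CreateResponse(HttpStatusCode.Unauthorized);
            }
            return await base.SendAsync(request, cancellationToken);
        }
    }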


  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog / logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads:
    - log collection: potentially massive streams of data need to be written quickly to disks and indexed;
    - log querying: logs will be queried both by fixed fields such as date and source and by text search.
    What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to off-load some processing? Some disks in a stripe and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.


  • Internet connection very slow after Linksys configuration

    - by NLV
    Hello. We have this network setup:
    - Server1 - DHCP server, domain controller, AD
    - a leased line for the Internet connection
    - from the leased line to a Linksys router (we don't use wireless, though)
    - from the Linksys to a Netgear 24-port switch and Vonage (VoIP)
    - from the Netgear to all our machines
    We configured the Linksys with the static IP and DNS server addresses our ISP gave us, and we have routed it correctly. All our work machines are configured to get an IP automatically, with the DNS server addresses our ISP gave. The problem is that none of the sites open promptly: it takes around 5 minutes to load google.com. But we are able to ping all the sites. What could be the problem?


  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    Ok, our new build is having 100% CPU spikes on each server at random intervals. For long durations this makes the site totally unresponsive - and it will be at peak times, as people in different countries log on to the site, etc. We've looked at perfmon, memory profilers, the CLR profiler, SQL profilers, Red Gate ANTS profiler, and tried load testing in UAT - but we cannot even reproduce the problem. This could mean only thousands of users hitting the live site causes it to happen. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?


  • How to generate a private/public key pair to use for a Linux server on Windows Azure?

    - by MainMa
    Following Windows Azure documentation, I generated a pair of private/public keys on an Ubuntu machine using the exact comment as given: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem When I open the private key in puttygen, the following error is displayed: Couldn't load private key (unrecognised key type) The private key generated by openssl looks correct: -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG6w0xAQEFAASCBKcwggSjAgEsAoIBAQC6OEZ5ULe6F6u2 Cybhqqfqqh2ao9sd2tpqB+HGIoMMHrmnD3YegRgZJIddTQaWKdwaKrYul21YNt5y ... P0RyfL9kDnX/XmIOM38FOoucGvO+Zozsbmgmvw6AUhE0sPhkZnlaodAU1OnfaWJz KpBxkXulBaCJnC8w29dGKng= -----END PRIVATE KEY----- Note that the comments to Azure documentation (the same link as above) report that the pair should be generated using OpenSSL for Windows instead of openssl on Linux. This doesn't help, since the same error appears for a private key generated by OpenSSL for Windows. What am I doing wrong?
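    A frequent cause of puttygen's "unrecognised key type" with exactly this header ("BEGIN PRIVATE KEY" rather than "BEGIN RSA PRIVATE KEY") is that openssl emitted the key in PKCS#8 format, which older puttygen builds cannot import. A hedged workaround is to convert the key to the traditional RSA format first:

    # Rewrites the PKCS#8 key ("BEGIN PRIVATE KEY") as a traditional RSA
    # key ("BEGIN RSA PRIVATE KEY"), which puttygen can then import
    openssl rsa -in myPrivateKey.key -out myPrivateKey-rsa.key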


  • Does disabling the error log for MySQL increase its performance? How do I disable it?

    - by adnan
    Does disabling the error log for MySQL increase its performance? And how do I disable it? This is my service status:
    Server load: 0.63 (8 CPUs)
    Memory used: 23.38% (957,600 of 4,096,000)
    Swap used: 0% (0 of 1)
    And this is a screenshot of the process manager: http://elnhrda.com/promgr.jpg
    This is my.cnf:
    [mysqld]
    query_cache_size=64M
    skip-name-resolve
    #innodb_file_per_table=1
    query_cache_limit=2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 16M
    sort_buffer_size = 8M
    join_buffer_size = 8M
    thread_cache_size = 8
    thread_concurrency = 8
    innodb_buffer_pool_size = 2G
    I am looking to do anything that will increase my website speed. I have a VPS with 4 GB RAM running CentOS 6 x86_64. Please note: these statistics were taken just now, with no queries executing and no visitors on the site at the same time.


  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and am currently writing a game that is in the prototype stage. Everything is going OK, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet; it then gives you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! This does not include the NPCs, which are another 10 per planet. So, 20x10 = 200. So now we have 600, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items I require global display for. I've also thought about storing each planet's data in a save file, using initWithCoder; however, there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and colors into/from SQL. I am open to other ideas on how to store and read game data, to avoid using a singleton to store everything, everywhere. Thanks for your time.

