Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

Page 567/1073

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, at some seemingly random point during the day, the log backup will jump from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.

    Read the article

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: A Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

    Read the article

  • Problem inserting pre-ordered values to quadtree

    - by Lucas Corrêa Feijó
    There's this string containing B, W and G (standing for black, white and gray). It defines the pre-order path of the quadtree I want to build. I wrote this method for filling the QuadTree class I also wrote, but it doesn't work: it inserts correctly until it reaches a point in the algorithm where it needs to "return". I use the mathematical quadrant convention for insertion order: northeast, northwest, southwest, southeast. A quadtree has all its leaves black or white and all the other nodes gray. The node used in the tree is called Node4T and has a char as data and 4 references to other nodes like itself, called, respectively, NE, NW, SW and SE.

        public void insert(char val) {
            if (root == null) {
                root = new Node4T(val);
            } else {
                insert(root, val);
            }
        }

        public void insert(Node4T n, char c) {
            if (n.data == 'G') { // possible to insert
                if (n.NE == null) { n.NE = new Node4T(c); return; } else { insert(n.NE, c); }
                if (n.NW == null) { n.NW = new Node4T(c); return; } else { insert(n.NW, c); }
                if (n.SW == null) { n.SW = new Node4T(c); return; } else { insert(n.SW, c); }
                if (n.SE == null) { n.SE = new Node4T(c); return; } else { insert(n.SE, c); }
            } else {
                // impossible to insert
            }
        }

    The input "GWGGWBWBGBWBWGWBWBWGGWBWBWGWBWBGBWBWGWWWB" is given a char at a time to the insert method, and then the print method is called, pre-order, and the result is "GWGGWBWBWBWGWBWBW". How do I make it import the string correctly? Ignore the string reading process; suppose the method is called and it has to do its job.
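
    For what it's worth, the likely culprit is that the recursive insert never reports whether it actually placed the value, so after recursing into NE the method falls through and tries NW as well. A minimal sketch of one possible fix (assuming the Node4T class above) is to have insert return a success flag:

        // Sketch: return true once the value has been placed, so the caller
        // knows to stop trying the remaining quadrants.
        public boolean insert(Node4T n, char c) {
            if (n.data != 'G') {
                return false; // leaf (B or W): nothing can go below it
            }
            if (n.NE == null) { n.NE = new Node4T(c); return true; }
            if (insert(n.NE, c)) { return true; }
            if (n.NW == null) { n.NW = new Node4T(c); return true; }
            if (insert(n.NW, c)) { return true; }
            if (n.SW == null) { n.SW = new Node4T(c); return true; }
            if (insert(n.SW, c)) { return true; }
            if (n.SE == null) { n.SE = new Node4T(c); return true; }
            return insert(n.SE, c);
        }

    A full gray subtree then reports false all the way up, and the next character correctly lands in the first unfilled quadrant in pre-order.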

    Read the article

  • Shooting Print Quality Pictures with a Camera Phone [Video]

    - by Jason Fitzpatrick
    Camera phones get a lot of grief for being underpowered, but in this video tutorial the crew at SLR Lounge shows how basic technique and a good eye overcome all. Last year, over at the photo blog FStoppers, they put together a video showing off how you could use the iPhone as a fashion camera – essentially arguing that the camera wasn't as important as the photographer. A lot of people reacted to the video by saying "Well yeah, but you had professional models and thousands of dollars in lighting equipment!" In turn, the crew at SLR Lounge decided to make their own video using only an iPhone camera and two reflectors (as well as an attractive but informal model). It of course helps to have some sidekicks to hold up your reflectors, but the point still stands: modern camera phones are perfectly capable of good photos. The SLR Lounge iPhone Photo Shoot – A Follow Up Tribute to The FStoppers [YouTube]

    Read the article

  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your server is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over the last 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store. Here's the configuration window showing how you can set this up: This is for version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to really have a useful baseline, as Ted lays out so well in his post.

    Read the article

  • Input/Output console window in XNA

    - by Will Bagley
    I am currently making a simple game in XNA but am at a point where testing various aspects gets a bit tricky, especially when you have to wait until you have 1000 score to see if your animation is playing correctly, etc. Of course I could just edit the starting variable in the code before I launch, but I have recently been interested in trying to implement a console-style window which can print out values and take input to alter public variables during run-time. I am aware that VS has the Immediate window, which achieves a similar thing, but I would prefer that mine be an actual part of the game, with the intention that the user may have limited access to it in the future. Some of the key things I have yet to find an answer to after looking around for a while are: how I would support free text entry, how I would access variables during runtime, and how I would edit these variables. I have also read about using a property grid from Windows Forms apps (and partially reflection), which looked like it could simplify a lot of things, but I am not sure how I would get that running inside my XNA game window, or how I would get it to not look out of place (as the visual aspect of it seems to be aimed just at development-time viewing). All in all I'm quite open to any suggestions on how to approach this task, as currently I'm not sure where to start. Thanks in advance.
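
    On the access/edit part, the usual answer is reflection: look a field up by name at runtime and overwrite it. A minimal sketch of that idea (written in Java for illustration; XNA itself is C#, where System.Reflection exposes the same concepts) for a console command like "set score 1000":

        import java.lang.reflect.Field;

        // Sketch of reflection-based variable editing for a debug console.
        // Given "set score 1000", look up the public field and overwrite it.
        public class DebugConsole {
            public static void set(Object target, String fieldName, String value)
                    throws ReflectiveOperationException {
                Field f = target.getClass().getField(fieldName); // public fields only
                if (f.getType() == int.class) {
                    f.setInt(target, Integer.parseInt(value));
                } else if (f.getType() == float.class) {
                    f.setFloat(target, Float.parseFloat(value));
                } else {
                    f.set(target, value);
                }
            }
        }

    Free text entry and rendering are then a purely game-side concern: buffer keystrokes, draw the buffer with a sprite font, and hand completed lines to a parser that calls something like the setter above.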

    Read the article

  • Fusion HCM in Boots

    - by Kristin Rose
    These boots are made for walking, and that's just what they'll do... Of course, by boots we're referring to Oracle's HCM Boot Camps for OPN members, which offer a hands-on approach to learning about Oracle Fusion HCM and Taleo positioning and capabilities. Those who attend an Oracle HCM boot camp will be prepared to achieve Oracle Fusion HCM PreSales Specialist status, discuss Oracle Fusion HCM with customers to build pipeline, and complete competency criteria toward Oracle Fusion HCM 11g Specialization! This in-person event offers expert-led sessions, discussion, and hands-on activities, meaning you will get the information quicker and remember it better! Plus, we think a free lunch is always a good thing. As a next step, all interested partners should: obtain self-service knowledge from the Oracle Fusion Human Capital Management 11g PreSales Specialist Guided Learning Path; become a Specialist by completing the Oracle Fusion Human Capital Management 11g PreSales Specialist Assessment; and contact their regional Oracle Alliances & Channels point-of-contact to learn more about these free OPN Boot Camp events and the opportunity to attend the next one. We know you'll be strutting your stuff after you've gained the knowledge and expertise to become Oracle Fusion HCM Specialized! Check it out! The OPN Communications Team

    Read the article

  • Why do some PDFs lag in Adobe Acrobat?

    - by Coldblackice
    I have a handful of PDFs open. One of them in particular is extremely laggy, almost to the point of being unreadable. When I scroll through its pages, it's almost like an extreme version of v-sync being turned off. Very choppy. Overall system resources are plentiful, and all of the other PDFs cruise up and down with no stuttering or problems. I've tried closing and reopening the problem PDF to no avail. It's a small PDF, only 3 MB in size, with no graphics (only programming code snippets). Surely it must be some type of problem with the specific PDF (I'll try opening it in another PDF-viewing program, rather than Acrobat X). Possible corruption? Could there be some type of GPU/hardware-acceleration issue going on? I've never heard of such a thing with PDF viewing.

    Read the article

  • Mac OS X Server 10.6.6 "disables" CPU in VMware Fusion

    - by wjlafrance
    Hello! I installed Mac OS X Server 10.6.0 in VMware Fusion the other night and it worked perfectly, until I ran Software Update. I upgraded to 10.6.6 through the combo updater, and now when I start the VM it says: "The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point." I've switched the operating system in the options to OS X Server 32-bit, 64-bit, and even to Windows 7, and nothing has worked. Does anyone have any ideas?

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the DB (I didn't write that part):

        com.something.db
        --public DataHelper
        --public Employee
            @DatabaseField    // e.g. "name" will be an actual column name in the DB
            -name
            @DatabaseField
            -salary
            etc... (all in all 50 fields)

    I have a com.something.sync package that contains all the implementation detail on how to send data to the server. It boils down to a ConnectionManager that is fed by different classes that implement a 'Request' interface:

        com.something.sync
        --public interface ConnectionManager
        --package ConnectionManagerImpl
        --public interface Request
        --package LoginRequest
        --package GetEmployeesRequest

    My issue is, at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It really doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write 3 Employee classes?

        EmployeeDB
            @DatabaseField    // e.g. "name" will be an actual column name in the DB
            -name
            @DatabaseField
            -salary
            -etc... 50 fields

        EmployeeInterface
            -getName
            -getSalary
            -etc... 50 fields

        EmployeeJSON
            -JSON_KEY_NAME = "name"    // the JSON key happens to match the column name, but that isn't a requirement
            -name
            -JSON_KEY_SALARY = "salary"
            -salary
            -etc... 50 fields

    It feels like a lot of duplicates. Is there a common pattern I can use there?
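
    One common answer, sketched below with hypothetical names: keep a single plain Employee class for the domain, and move all JSON knowledge into a separate mapper, so the DB annotations and the JSON keys never live in the same file. Assuming Employee has plain getters and setters, something like:

        import org.json.JSONException;
        import org.json.JSONObject;

        // Sketch (hypothetical names): one plain domain class, one mapper that
        // owns the JSON keys. The DB layer keeps its own annotated class; the
        // mapper is the only place that knows the wire format.
        public class EmployeeJsonMapper {
            private static final String KEY_NAME = "name";     // assumed JSON key
            private static final String KEY_SALARY = "salary"; // assumed JSON key

            public static JSONObject toJson(Employee e) throws JSONException {
                JSONObject o = new JSONObject();
                o.put(KEY_NAME, e.getName());
                o.put(KEY_SALARY, e.getSalary());
                return o;
            }

            public static Employee fromJson(JSONObject o) throws JSONException {
                Employee e = new Employee();
                e.setName(o.getString(KEY_NAME));
                e.setSalary(o.getDouble(KEY_SALARY));
                return e;
            }
        }

    The 50 fields still have to be listed once in the mapper, but only once, and the domain class stays free of transport concerns.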

    Read the article

  • SMTP server to deliver mail to Rails app, how?

    - by Gunchars
    All, this is my first question and I hope I chose the right place to post it. Here's what I need help with: I've been looking for this all day and I'm having a hard time finding an SMTP mail server that would fit the following criteria: lightweight, does one thing and does it well; is able to route and deliver local mail to a Rails application. The second point could be accomplished in any number of ways. I'm running a VPS, so I have full freedom in how to implement this. It could, for example, put messages straight in the DB, pipe them to a helper program that would then process them accordingly, or save messages in an mbox file and run a script after every received message. I'm building a small site, so the traffic is not going to be a problem. If there are alternative ways to deliver messages to a Rails app, I'd gladly hear about them. Thank you. EDIT: After long searching, I think I've found what I was looking for. Exim is a mail server that can deliver local mail to pipes. Also, Rails 3 and ActionMailer can make it really easy to process the incoming mail. More info here: http://www.exim.org/exim-html-current/doc/html/spec_html/ch29.html http://guides.rubyonrails.org/action_mailer_basics.html#receiving-emails

    Read the article

  • Windows won't ctrl+c and ctrl+v

    - by Thomasstuck
    Using Ctrl+C and Ctrl+V, I am unable to copy or paste anything. I have tried many different applications and still no luck. I am able to copy/paste via the shortcut menu when I right-click the mouse. Ctrl+F works, which means the Ctrl button is functional. I've tried restoring to an earlier restore point. I also ran SFC /SCANNOW (System File Checker), which did not find any errors. Ctrl+C and then right-click and paste doesn't work either. I have tried with external and internal keyboards and still no luck. I've clicked on all the relevant search results from Google, but none of them work for me. Does anyone know what may be causing this issue and how I can correct it? This problem comes and goes at random times.

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ "magic" in my hearing. I leapt to LINQ's defense immediately. Turns out some people don't realize "magic" can be a pejorative term. I thought LINQ needed demystification. Here's your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won't repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let's tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        ...
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
        ...

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., "an object structure that represents the structure of the anonymous function itself" (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        ...
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")), prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
        ...

    If the "products" expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we've reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I'm using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(thisMethod, source.Expression, Expression.Quote(predicate))); // thisMethod: the MethodInfo of this Where method
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we're constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the "products" collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a "leaf" expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as "type irritation" because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let's consider the behavior of an alternative LINQ provider. If "products" is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the "leaf" expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a "process" in StreamInsight 2.1, what is a leaf? To StreamInsight, a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

        Example 1: defining a source and composing a query

    Let's look in more detail at what's happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can't serialize 'local'!

        Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The "define" methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you've defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using "bridge" method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on the Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it 'owns' the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let's use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It's incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team

    Read the article

  • Get 100 highest numbers from an infinite list

    - by Sachin Shanbhag
    One of my friends was asked this interview question: "There is a constant flow of numbers coming in from some infinite list of numbers, out of which you need to maintain a data structure so as to return the top 100 highest numbers at any given point in time. Assume all the numbers are whole numbers only." This is simple: you keep a sorted list in descending order and keep track of the lowest number in that list. If a new number obtained is greater than that lowest number, then you remove that lowest number and insert the new number into the sorted list as required. Then the question was extended: "Can you make sure that the order for insertion is O(1)? Is it possible?" As far as I knew, even if you add a new number to the list and sort it again using any sort algorithm, it would at best be O(log n) (I think). So my friend said it was not possible. But the interviewer was not convinced and asked him to maintain some other data structure rather than a list. I thought of a balanced binary tree, but even there you will not get insertion in order 1. So now I have the same question too. I wanted to know whether there is any such data structure which can do insertion in order 1 for the above problem, or whether it is not possible at all.
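
    For what it's worth, one standard observation: since the structure never holds more than 100 elements, a min-heap capped at 100 makes every insertion O(log 100), which is a constant bound. A sketch:

        import java.util.PriorityQueue;

        // Sketch: keep the 100 largest numbers seen so far in a min-heap.
        // The heap never exceeds 100 elements, so each insert costs at most
        // O(log 100) (a constant), no matter how many numbers flow past.
        public class Top100 {
            private static final int CAPACITY = 100;
            private final PriorityQueue<Integer> heap = new PriorityQueue<>(CAPACITY);

            public void offer(int value) {
                if (heap.size() < CAPACITY) {
                    heap.add(value);
                } else if (value > heap.peek()) { // peek() = smallest of the top 100
                    heap.poll();                  // evict the smallest
                    heap.add(value);
                }
            }

            public Iterable<Integer> top100() {
                return heap; // unordered; sort a copy if ranking matters
            }
        }

    Whether O(log 100) counts as the O(1) the interviewer wanted is exactly the kind of thing worth saying out loud in an interview.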

    Read the article

  • Searching for entity awareness in 3D space algorithm and data structure

    - by Khanser
    I'm trying to build a huge AI system just for fun, and I've come to this problem: how can I let the AI entities know about each other without making the CPU perform redundant and costly work? Every entity has a spatial awareness zone, and it has to know what's inside that zone when it has to decide what to do. First thought: for every entity, test whether the other entities are inside the first one's reach. OK, so that was the first try, and yep, it is redundant and costly. We are working with real-time AI over 10000+ entities, so this is not a solution. Second try: calculate some grid over the awareness zone of every entity and test whether there are entities in these zones (we are working with 3D entities with float x, y, z location coordinates), testing every point in the grid against the indexed-by-coordinate entities. Well, I don't like this because it is also costly, but not as much as the first one. Third: create some multi-linked lists over the x and y indexed positions of the entities, so that when we search for an interval between the x,y and z,w positions (this interval defines the square over the spatial awareness zone) over the multi-linked list, we won't have 'voids'. This has the problem of finding the nearest proximity value if there isn't one at the position where we start the search. I'm not convinced by any of these ideas, so I'm searching for some enlightenment. Do you people have any better ideas?
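
    A common refinement of the second idea is spatial hashing: instead of a grid per entity, bucket all entities into one uniform grid keyed by integer cell coordinates, so an awareness query only touches the handful of cells the zone overlaps. A minimal sketch (hypothetical names, two axes shown; a third axis extends the key the same way):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Spatial hash sketch: entities are bucketed by integer cell coords,
        // so a query touches only the cells overlapped by the awareness zone
        // instead of all 10000+ entities.
        public class SpatialHash<E> {
            private final float cellSize;
            private final Map<Long, List<E>> cells = new HashMap<>();

            public SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private static long key(int cx, int cy) {
                return ((long) cx << 32) ^ (cy & 0xffffffffL); // pack two indices
            }

            private static int cell(float v, float size) {
                return (int) Math.floor(v / size);
            }

            public void insert(E entity, float x, float y) {
                cells.computeIfAbsent(key(cell(x, cellSize), cell(y, cellSize)),
                                      k -> new ArrayList<>()).add(entity);
            }

            // Gather candidates from every cell a square zone overlaps;
            // exact distance checks are left to the caller.
            public List<E> query(float x, float y, float radius) {
                List<E> result = new ArrayList<>();
                for (int cx = cell(x - radius, cellSize); cx <= cell(x + radius, cellSize); cx++) {
                    for (int cy = cell(y - radius, cellSize); cy <= cell(y + radius, cellSize); cy++) {
                        List<E> bucket = cells.get(key(cx, cy));
                        if (bucket != null) result.addAll(bucket);
                    }
                }
                return result;
            }
        }

    Picking the cell size close to a typical awareness radius keeps each query at roughly a 3x3 block of cells; moving entities only need to be re-bucketed when they cross a cell boundary.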

    Read the article

  • Brain picking during job interview

    - by mark
    Recently, I had a job interview at a big Silicon Valley company for a senior software developer/R&D position. I had several technical phone screens, an all-day on-site interview, and more technical phone screens for another position later. The interviews went really well, and I have a PhD and working experience in the area I was applying for, yet no offer was made. So far, so good. It was an interesting experience, I am employed, and I have absolutely no hard feelings about this. However, some of the interviewers asked really detailed questions, to the point of being suspicious, about new technologies I have been working on. These technologies are still in development and have not come to market yet. I know some major hardware/software companies are working on this too. I have had many interviews before, and based on my former interviewing experience and the impression some of the interviewers left behind, I now know that all this company wanted from me was to extract some ideas about what I did in this field. Remember, I am referring to an R&D position, not the standard software developer stuff. Has anybody encountered this situation before? And how did you deal with it? I am not so much concerned about "stealing" ideas as about being tricked into showing up for an interview when there is no intention to hire anyway. I am considering refusing technical interviews in the future and instead proposing a trial period in which the company can easily reconsider its hiring decision.

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet, and differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of this pattern is to restore the state of the system. Wiki says that the primary intention is to be able to restore the changes to the system. This gives a different impression: it suggests that it is possible for a system to have a memento implementation with no need to restore, and that the ability to restore is just one feature of it. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, which is useful in case of error or failure. So, my question is: why exactly do we use this pattern? Is it to save previous states, or to promote encapsulation between the Caretaker and the Memento? Why is this type of encapsulation so important? Edit: for those visiting, check out this implementation!
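
    For reference, a minimal sketch of the pattern (hypothetical names): the originator snapshots its own state into an opaque memento, and the caretaker stores the snapshot without being able to look inside it. That opacity is precisely the encapsulation the pattern protects; saving and restoring state is what it is for.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Minimal memento sketch (hypothetical names). The caretaker can hold
        // and return snapshots, but only Editor can read or write their contents.
        public class Editor {
            private String text = "";

            public void type(String s) { text += s; }
            public String getText()    { return text; }

            // Opaque snapshot: state is private and exposes no getters.
            public static final class Memento {
                private final String state;
                private Memento(String state) { this.state = state; }
            }

            public Memento save()          { return new Memento(text); }
            public void restore(Memento m) { this.text = m.state; }

            public static void main(String[] args) {
                Editor editor = new Editor();
                Deque<Editor.Memento> history = new ArrayDeque<>(); // the caretaker
                editor.type("Hello");
                history.push(editor.save());
                editor.type(", world");
                editor.restore(history.pop()); // undo
                System.out.println(editor.getText()); // prints "Hello"
            }
        }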

    Read the article

  • understanding mount -o bind

    - by Ionut
    A few questions after running the following commands:

        mount -o bind /new_disk/home/user/ /home/user/
        mount -o bind --no-mtab /new_disk/home/user/ /home/user/

    What is the difference between the two commands, other than "Mount without writing in /etc/mtab. This is necessary for example when /etc is on a read-only filesystem."? What is the difference between mount -o bind and mount --bind, if there is one? Let's suppose I don't know there is a partition mounted using -o bind --no-mtab... where can I find out whether there is any mount point with bind? The only way I can detect this is grep user /proc/mounts, but in that line there is no info about bind. Thank you.

    Read the article

  • Is it possible to combine two internet connections to increase performance?

    - by cornjuliox
    I've got a small home network, 3 PCs plus a laptop or two when the relatives come to visit, connected to a single cable internet connection. Now, as soon as everyone starts using the 'net, the performance starts to suffer, and if the load is heavy enough nobody can get anything done and everyone complains. At one point it was so bad that only one of us could use it at a time. While researching possible solutions to this problem, I heard that some internet cafes utilize 2 internet connections, possibly from different providers, and have some sort of router that allows them to split the traffic between the two, with online games going through one and web traffic going through the other. Is this possible? What is the technical term for it, and can/should it be applied to a home network setup, or is there another solution to this problem?

    Read the article

  • When to use delaycompress option in logrotate?

    - by Anand Chitipothu
    The man page of logrotate says that delaycompress:

        can be used when some program cannot be told to close its logfile and
        thus might continue writing to the previous log file for some time.

    I'm confused by this. If a program cannot be told to close its logfile, it will continue to write forever, not just for some time. If the compression is postponed to the next rotation cycle, the program continues to write to that file even after the next rotation cycle. How does postponing solve the problem? My understanding is that copytruncate should be used when a program cannot be told to close the logfile. I'm aware that some data written to the logfile gets lost while the copy is in progress. I was looking at the logrotate file for couchdb, and it had both the copytruncate and delaycompress options:

        /usr/local/couchdb-1.0.1/var/log/couchdb/*.log {
            weekly
            rotate 10
            copytruncate
            delaycompress
            compress
            notifempty
            missingok
        }

    It looks like there is no point in using delaycompress when copytruncate is already there. What am I missing?

    Read the article

  • Subversion for web designer: repository on a network share and ftp to the live server?

    - by ceatus
    My configuration:

        - htdocs on a Windows network share (z:)
        - web developers check out with Dreamweaver, modify, and check back in to the z drive
        - LAMP running on an Ubuntu server virtualized on Hyper-V, with Apache pointing at the z drive for dev, in order to test the websites
        - upload by FTP to the live server

    Now: I need multiple access to the repository, I need to keep it on a network share, and we manage about 200 websites. All the web developers, administrators and IT need access to the share. I found out that creating an SVN server is the best way for me, so I created it on an Ubuntu server which is virtualized on Hyper-V. Right now I have the repos local on the Ubuntu server, but I'd like them on my network drive, and I'd like to have a post-commit hook, if possible, in order to FTP directly to my live server. Do you guys think that a WebDAV solution would be better? Thanks in advance, Angelo

    Read the article

  • How to install Ubuntu 12.04.1 in EFI mode with Encrypted LVM?

    - by g0lem
    I'm trying to properly install Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate install CD ".iso" on a Lenovo ThinkPad X220. The default hard disk (with a pre-installed version of Windows 7) has been replaced with a brand new SSD. The UEFI BIOS of the ThinkPad X220 is set to "UEFI Boot only" and "USB UEFI BIOS Support" is enabled (I'm using an external USB DVD reader to perform the installation). The BIOS is a Phoenix SecureCore Tiano, BIOS version 8DET56WW (1.26). With the UEFI BIOS settings described above, here's what I've tried so far.

    Boot on a live GParted CD:
        - Create a GPT partition table
        - Create a FAT32 partition for the UEFI System Partition, set the partition to type "EF00" ("boot" flag)
        - Leave the remaining space unformatted

    Boot on the Ubuntu 12.04.1 LTS 64-bit PC (AMD64) alternate CD:
        - Perform the install with network updates enabled, using manual partitioning
        - The FAT32 partition created with GParted is used as the "EFI System Partition"
        - The remaining space is set to be used as "Physical volume for LVM"
        - Then "Configure encrypted volumes", using the previous "Physical volume for LVM" as the encrypted container, and set up a passphrase
        - "Configure the Logical Volume Manager", creating a volume group on the encrypted container /dev/mapper/sda2_crypt
        - Create the logical volumes ("Create logical volume", choosing the previously created volume group) and assign a mount point and file system to each: LV-root for /, LV-var for /var, LV-usr for /usr, LV-usr-local for /usr/local, LV-swap for swap, LV-home for /home (note: /tmp would be in RAM only, using tmpfs)

    Bootloader step: neither my ESP partition (/dev/sda1), /dev/sda, nor the MBR seems to be the right place for GRUB; I get the following message (the X suffix is for demonstration only):

        unable to install grub in /dev/sdaX
        Executing 'grub-install /dev/sdaX' failed
        This is a fatal error.

    I finish the installation without the bootloader and reboot. The system doesn't start; there's no EFI/GRUB menu at startup. What are the steps to perform a clean and working installation of Ubuntu 12.04.1 Precise Pangolin, 64-bit, in (U)EFI mode using the encrypted LUKS + LVM scheme described above?

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (it's one of the famous PHP frameworks). We found all the corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it's not FTP or SSH password theft. The hosting provider's support specialist (after analyzing the logs) said that it was a security hole in our code. My questions:

    1) What tools should I use to review the access and error logs of Apache? (Our server distro is Debian.)
    2) Can you give tips for detecting suspicious lines in logs? Maybe tutorials or examples of useful regexps or techniques?
    3) How do I separate "normal user behavior" from suspicious behavior in the logs?
    4) Is there any way of preventing attacks in Apache?

    Thanks for your help.
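
    On question 2, a minimal sketch of the idea (the patterns below are illustrative, not a complete or authoritative rule set): scan the access log for request strings that commonly show up in injection and traversal probes.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.regex.Pattern;

        // Sketch: flag access-log lines matching a few common attack
        // signatures (path traversal, PHP stream wrappers, SQLi, XSS probes).
        public class LogScan {
            private static final List<Pattern> SUSPICIOUS = List.of(
                Pattern.compile("\\.\\./"),             // path traversal
                Pattern.compile("(?i)php://|data://"),  // PHP stream wrappers
                Pattern.compile("(?i)union\\s+select"), // SQL injection probe
                Pattern.compile("(?i)<script")          // reflected XSS probe
            );

            public static void main(String[] args) throws IOException {
                try (var lines = Files.lines(Paths.get(args[0]))) { // e.g. access.log
                    lines.filter(l -> SUSPICIOUS.stream()
                                                .anyMatch(p -> p.matcher(l).find()))
                         .forEach(System.out::println);
                }
            }
        }

    Lines flagged this way are only a starting point; correlating the timestamps of flagged requests with the modification times git reported for the corrupted files is usually what narrows it down.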

    Read the article

  • Advice on a 240,000 sqft outdoor wireless network

    - by whlspacedude
    I would be very appreciative of some advice on purchasing equipment to provide a wireless network that covers the entire area of an outdoor arena. The area is rectangular-ish in shape: 400 ft wide and 600 ft long. It has 6 light towers: 1 on each of the 400-foot ends and 2 on each of the 600-foot sides. I can mount on anything and spend as much money as needed. The network needs to provide access for up to 15 wireless HD cameras with audio, plus a public Wi-Fi network. Can someone point me in the right direction as far as equipment and antennas? I can provide any additional information that you may need.

    Read the article

  • WMII Terminal Width of 80 Columns for xterm (colrules)

    - by BCable
    I'm trying to get WMII to split horizontally at 80 columns for xterm, but I'm only seeing a way to do this via percentages. It would be nice to be able to set it by something other than a percentage for various resolutions, but if I have to deal with that, I will. The problem is that even percentages don't work at my resolution (1366x768): 47+47 in /colrules yields 79 characters, and 48+48 yields 81 characters. As far as I can tell, no decimal notation is allowed that would let me use 47.5, for instance. I came from Ion3 and I'm used to using 80-column terminals, resizable by the keyboard, to get a reasonable cut-off point for Vim when I'm coding. I would just settle for using the mouse, but WMII seems to be much more fluid than Ion3, so I would have to do it a LOT, which sounds annoying. Any ideas?

    Read the article
