Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.

  • Looking for bug tracking software

    - by Shelton
    I'm looking for a bug/issue tracking system that can:
      - Integrate with lots of other services: Basecamp, Beanstalk, etc.
      - Integrate with popular CMSs, such as WordPress, so the client can enter a ticket from a system that is familiar to them and not have one more login to worry about.
      - Generate reports for my own purposes. Bonus if there's an iPhone app.
      - Work without additional development on my end (I have plenty of money and no time).
    I've already looked into Lighthouse and ZenDesk -- both are solid offerings -- but I don't see what I need out of the box; I'd have to build CMS plug-ins. And I've looked through WP plug-ins for bug tracking software, but nothing I've found integrates with these products. Does anyone know of something that meets these requirements without additional development, or am I stuck putting my business on hold to get this piece in place myself?

  • Speed up ssh login using public key down to 0.1sec

    - by BarsMonster
    Hi! I am using PuTTY to log in to my local server, but it takes about 1.5 seconds (from the click on 'Connect' to a working command prompt), and most of that time is spent on "Authenticating with public key...". I know many see even slower speeds, but I would like login to take no more than 0.1 seconds. I have already set UseDNS=no, allowed only IPv4 in the PuTTY client, and reduced the key length from 4k down to 1k. Any other suggestions to speed it up further?
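
    A couple of server-side settings are commonly blamed for login delay; this is a sketch of the usual suspects to check in /etc/ssh/sshd_config (an assumption about where the time goes, not a guaranteed fix):

        # /etc/ssh/sshd_config -- settings often behind slow logins
        UseDNS no                  # skip the reverse-DNS lookup of the connecting client
        GSSAPIAuthentication no    # skip GSSAPI/Kerberos negotiation attempts
        # restart the sshd service after editing for the change to take effect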

  • MVC 2 in 2 Minutes!

    - by Steve Michelotti
    In a couple of recent Code Camps, I’ve given my presentation: Top 10 Ways MVC 2 Will Boost Your Productivity. In the presentation, I cover all the major new features introduced in MVC 2, with a focus on productivity enhancements. To drive the point home, I conclude with a final demo where I build a couple of screens from scratch, highlighting many (but not all) of the features previously covered in the talk. A couple of weeks ago I was asked to make it available online, so here it is. In 2 minutes, the demo builds a couple of screens from scratch that provide a goal-setting tracker for a user. MVC 2 features included in the video are:
      - Template Helpers / Editor Templates
      - Server-side/Client-side Validation
      - Model Metadata for View Model
      - HTML Encoding Syntax
      - Dependency Injection
      - Abstract Controllers
      - Custom T4 Templates
      - Custom MVC Visual Studio 2010 Code Snippets
    The complete code samples and slide deck can be downloaded here: Top 10 Ways MVC 2 Will Boost Your Productivity. Enjoy!

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website to showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
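
    Concretely, the two measures described above would look something like this on the mirror (the domain name is the placeholder from the question):

        # robots.txt at the root of the showcase subdomain -- block all crawlers
        User-agent: *
        Disallow: /

        <!-- in the <head> of each mirrored page, point at the client's original -->
        <link href="http://www.client-canonical-site.com/" rel="canonical" />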

  • Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    This document contains step-by-step instructions for installing and testing the Microsoft Business Intelligence infrastructure based on SQL Server 2012 and SharePoint 2010, focused on SQL Server 2012 Reporting Services with Power View. The document describes how to completely install the following scenarios: a standalone instance of SharePoint and Power View with all required components; a new SharePoint farm with the Power View infrastructure; a server with the Power View infrastructure joined to an existing SharePoint farm; installation of client tools on a separate computer; installation of a tabular instance of Analysis Services on a separate instance; and configuration of single sign-on access for double-hop scenarios, with and without Kerberos. Scripts are provided for most of these scenarios.

  • Web Page Execution Internals

    - by octopusgrabbus
    My question is: what subject area covers web page execution and loading? I am looking to purchase a book, by subject area, that covers when things execute or load in a web page, whether it's straight HTML, HTML and JavaScript, or a PHP page. Is that topic covered by a detailed HTML book, or should I expect to find information like that in a JavaScript or PHP book? I understand that PHP and Perl execute on the server and that JavaScript is client side, and I know there is a lot of online documentation describing <html>, <head>, <body>, and so on. I'm just wondering what subject area a book would be in to cover all that -- not a discussion of the best book or someone's favorite book, just the subject area.

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications.

    So the first thing to define, of course, is the word “better” in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion, then, I stipulate that cloud computing can yield higher-quality applications in terms of scalability, everything else being equal.

    Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices.

    For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close it (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible -- a REST service, for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However, it's not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale, for the purpose of this discussion].

    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).

    The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, a loosely coupled system becomes a better design because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality.

    Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better...

    ... the cloud forces better design practices. My 2 cents.
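
    To make the contrast concrete, here is a minimal sketch of the stateless style described above, using .NET's HttpListener; every request is handled and its resources released immediately, with no session pinning a client to a particular server (the URL and reply body are placeholders, and error handling is omitted):

        using System;
        using System.Net;
        using System.Text;

        class StatelessService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/"); // placeholder endpoint
                listener.Start();
                while (true)
                {
                    // each request is independent: no session state survives it, so any
                    // one of hundreds of identical machines could serve the next request
                    HttpListenerContext ctx = listener.GetContext();
                    byte[] body = Encoding.UTF8.GetBytes("ok");
                    ctx.Response.ContentLength64 = body.Length;
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close(); // release the connection's resources right away
                }
            }
        }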

  • Looking for a better way to combine deep architecture refactoring with feature-based development

    - by voroninp
    Problem statement. Given:
      - TFS as source control
      - a heavy desktop client application with tons of legacy code and bad (or almost absent) architecture design
      - clients constantly requiring new features with sound quality and fast delivery, and constantly complaining about the user-unfriendly UI
    Problem: the application undoubtedly requires deep refactoring, and this process inevitably makes the application unstable, so a dedicated stabilization phase is needed. We've tried refactoring in master with periodic merges from master (MB) to the feature branches (FB) -- my mistake. Result: many unstable branches. What we are advised: create an additional branch for refactoring (RB), periodically synchronizing it with MB via merges from MB to RB. After RB is stabilized, we substitute master with RB and create a new branch for further refactoring. That is the plan, but here I expect the real hell of merging MB to RB after merging any FB to MB. The main advantage: a stable master most of the time. Are there any better alternatives to this process?

  • Google: new Gmail apps for Android and iOS, and a new YouTube for Apple's iDevices

    Gmail: new apps for Android and iOS, and a new YouTube for Apple's iDevices. Fragmentation also affects Google's applications for its own mobile OS: its new Gmail for Android, which has just been released, is only available for versions 4.0 and later of the system (currently 4.1). Among the improvements to the mobile mail client, in no particular order: a new preview of photos in emails, the ability to attach a video or photo directly to a message, automatic adjustment of the font size to the screen, and swiping messages to the right to sort or delete them. ...

  • Syncing properties across a game server

    - by Vaughan Hilts
    I'm beginning to implement a simple scripting system into my networked server, and I've hit a snag. Until now, I've been wrapping my calls into functions on objects that manipulate other objects, but lately I've been finding this to be a pain for simple things. For example, if I set player.HP = 1, this works server-side, but the player side never sees the change unless I explicitly send a packet to inform the client. For more complicated changes that require several steps, like map swapping (change X, Y, and Map, then do this...), I have a function; that's fine. But what about these small properties I want to sync?
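
    One common pattern for the small-property case is to have each setter record that the property is dirty, and let the network loop batch all dirty properties into one sync packet per tick. A minimal sketch (the class, property names, and send callback here are hypothetical, not from the question; reflection is used only for brevity):

        using System;
        using System.Collections.Generic;

        class Player
        {
            // properties whose new values still need to be sent to the client
            public HashSet<string> Dirty { get; } = new HashSet<string>();

            private int hp;
            public int HP
            {
                get { return hp; }
                set { hp = value; Dirty.Add("HP"); } // mark for the next sync packet
            }

            // called once per server tick by the (hypothetical) network layer
            public void FlushDirty(Action<string, object> send)
            {
                foreach (var name in Dirty)
                    send(name, GetType().GetProperty(name).GetValue(this));
                Dirty.Clear();
            }
        }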

  • How can IIS 7.5 have the error pages for a site reset to the default configuration?

    - by Sn3akyP3t3
    A mishap occurred with web.config while accommodating a subsite. I made use of “<location path="." inheritInChildApplications="false">”; essentially it was a workaround for nested web.config files that were causing a conflict. The result was that error pages were no longer handled properly: error 500 was being passed to the client for every type of error encountered. Removing the offending inheritInChildApplications tag from the root web.config restored normal operation for most of the error handling, but for some reason, although error 503 now returns the correct response header, IIS is also performing the custom action for error 403.4, which is a redirect to HTTPS. I'm looking to restore the defaults for the error pages so that the behavior is back to normal; I can then re-add my error page customizations.

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in Linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu server). We have been facing a high recv-q problem for quite some time: every few hours the application hangs and stops reading data from the socket. In a thread dump, we found the stack trace below.

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine, and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to check the recv-q. Recently the receive buffer has been getting stuck: the recv-q size does not decrease, and as a result we lose a lot of hits from the other side -- our application stops accepting any data.

  • What source control to use for my private gaming server?

    - by crosenblum
    It has SQL Server components, a client launcher, and server software. I want to use an online resource where people can make updates, making it easier to roll out changes to players. Most of the files are just text files or gtx image files. I don't think this qualifies as open source, so I don't know what to do. I tried GitHub, and have a free account there, but it was really clunky mass-adding every file to be committed. I really don't like Subversion, but if that's the best option, I'll use it. The other people who will need access to the files will have no familiarity with any kind of source control, so I need an easy system for them to download files, make changes, and commit to the repository. Any suggestions?

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
    “How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production.”

    Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.

    Loading R scripts in the repository

    Before talking about migration, we'll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script

        id <- 1:10
        plot(1:100,rnorm(100),pch=21,bg="red",cex=2)
        data.frame(id=id, val=id / 100)

    users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like

        ore.scriptCreate("RandomRedDots", function () {
          id <- 1:10
          plot(1:100,rnorm(100),pch=21,bg="red",cex=2)
          data.frame(id=id, val=id / 100)
        })

    In SQL, this looks like

        begin
          sys.rqScriptCreate('RandomRedDots',
            'function(){
               id <- 1:10
               plot(1:100,rnorm(100),pch=21,bg="red",cex=2)
               data.frame(id=id, val=id / 100)
             }');
        end;
        /

    The R function ore.scriptDrop and the SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.

    Accessing R scripts once they've been loaded

    If you're not using a source code control system, it is possible for your R scripts to be misplaced or for files to be modified, making what is stored in Oracle Database the only, or best, copy of your R code. If you've loaded your R scripts to the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example,

        select * from sys.rq_scripts where name='myScriptName';

    From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:

        ore.scriptLoad <- function(name) {
          query <- paste("select script from sys.rq_scripts where name='",name,"'",sep="")
          str.f <- OREbase:::.ore.dbGetQuery(query)
          assign(name,eval(parse(text = str.f)),pos=1)
        }

        ore.scriptLoad("myFunctionName")

    This function is also useful if you want to load an existing R script from the repository into another R script in the repository -- think modular coding style. Just include this function in the body of the other function and load the named script.

    Migrating R scripts from one database instance to another

    To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.

        scriptNames <- OREbase:::.ore.dbGetQuery(
          "select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME

        for(s in scriptNames) {
          cat(s,"\n")
          ore.scriptLoad(s)
        }

        ore.disconnect()
        ore.connect("rquser","orcl","localhost","rquser")

        for(s in scriptNames) {
          cat(s,"\n")
          ore.scriptDrop(s)
          ore.scriptCreate(s,get(s))
        }

    Best Practice

    When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts as you would a directory structure. For example, if an organization has multiple groups or applications sharing the same database, and there are multiple components, use “/” to organize the functions:

        ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
        ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
        ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
        ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
        ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)

    Users can then query for all functions using the path prefix when looking up functions.

  • How can I set up nginx to serve virtual hosts with Rails (Unicorn/Passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using sites like this one to guide me. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;
            location / {
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;
            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a Unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com, although I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent 2 days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.
    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.

  • What is the best practice for website design and markup now that mobile browsers are common?

    - by Jonathan Drain
    Back in 2008, smartphones were a small market, and it was commonplace for sites to be designed for a fixed width -- say, 900px or 960px -- with the page centered if the browser window was larger. Many designers said fluid width was better, but since user screens typically varied between 1024x768 and 1920x1080, fluid width allowed longer line lengths than are optimal for ease of reading, and so many sites (including Stack Exchange) use fixed width. Now that mobile devices are common, what is the best approach to support both desktop and mobile browsers?
      - Establish a separate mobile site (e.g. mobile.example.com)?
      - Serve a different CSS to mobile devices? If so, how: server-side browser sniffing, or a @media rule (see the sketch after this list)?
      - Use JavaScript or something to adapt the website dynamically to the client?
      - Should all websites be expected to be responsive?
      - Some kind of fluid layout?
      - Something else?
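
    For reference, a minimal sketch of the @media option mentioned above -- the breakpoint and selectors are placeholders, not a recommendation:

        /* desktop default: the fixed, centered column described above */
        .content { width: 960px; margin: 0 auto; }

        /* on narrow screens, let the column go fluid */
        @media (max-width: 480px) {
          .content { width: auto; margin: 0; }
        }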

  • Ping one remote server from another remote server

    - by user666254
    It's simple to ping a server in C#, but suppose I have servers A, B and C. A connects to B. A asks B to ping C, to check that B can talk to C. A needs to read the outcome. Now, first of all, is this possible without installing an application onto B? In other words, can I perform the entire check just by running a program on A? If so, can anyone suggest the route I would take to achieve this? I've looked at sockets, but from the examples I've seen, these require both a client AND a server application to function.
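
    For reference, the "simple" local case the question starts from looks roughly like this (a sketch; the host name is a placeholder) -- the A-asks-B-to-ping-C part is the open question:

        using System;
        using System.Net.NetworkInformation;

        class PingCheck
        {
            static void Main()
            {
                // ping server C directly from the machine this code runs on
                PingReply reply = new Ping().Send("serverC.example.com"); // placeholder host
                Console.WriteLine(reply.Status);        // IPStatus.Success if reachable
                Console.WriteLine(reply.RoundtripTime); // milliseconds
            }
        }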

  • Configuring SQL Server Express Edition for remote access

    - by rohancragg
    Originally posted on: http://geekswithblogs.net/rohancragg/archive/2013/07/24/configuring-sql-server-express-edition-for-remote-access.aspx
    I wanted to access SQL Express on my local machine from within a client Hyper-V virtual machine on the same domain. This article got me most of the way there: http://akawn.com/blog/2012/01/configuring-sql-server-2008-r2-express-edition-for-remote-access/ but it was a bit out of date. My steps were:
      1. Enable the TCP/IP protocol in SNAC.
      2. Restart SQL Server.
      3. Configure the (Windows 8) firewall to allow all inbound traffic for sqlservr.exe.
    Footnote: I thought this might be relevant (nice to be able to script it): http://support.microsoft.com/kb/968872/en-us -- but the problem is that this is for fixed ports and is not compatible with the (default) Dynamic Ports setting above.
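
    Step 3 can still be scripted under dynamic ports by allowing the program rather than a port. A sketch (the sqlservr.exe path varies by version and instance name, so the one below is an assumption):

        REM allow SQL Server Express inbound by program, so dynamic ports don't matter
        netsh advfirewall firewall add rule name="SQL Server Express" dir=in action=allow ^
            program="C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Binn\sqlservr.exe"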

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven’t been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say, I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It’s IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve’s post on the general setup.
    One claim that you get back from Facebook is an access token. This token can be used to talk directly to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user.
    Once you have the access token from ACS, you can use e.g. the Facebook SDK from CodePlex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a “real” app. Depends ;) The code looks like this (sample code!):

        public class ClaimsTransformer : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(
                string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                if (!incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                string accessToken;
                if (incomingPrincipal.TryGetClaimValue(
                    "http://www.facebook.com/claims/AccessToken", out accessToken))
                {
                    try
                    {
                        var home = GetFacebookHometown(accessToken);
                        if (!string.IsNullOrWhiteSpace(home))
                        {
                            incomingPrincipal.Identities[0].Claims.Add(
                                new Claim("http://www.facebook.com/claims/HomeTown", home));
                        }
                    }
                    catch { }
                }

                return incomingPrincipal;
            }

            private string GetFacebookHometown(string token)
            {
                var client = new FacebookClient(token);

                dynamic parameters = new ExpandoObject();
                parameters.fields = "hometown";

                dynamic result = client.Get("me", parameters);
                return result.hometown.name;
            }
        }

  • Meaning of the free space indication in Deluge

    - by Tjae Beamon
    Recently I installed Ubuntu 12.04 using Wubi alongside my current Windows Vista. I have already installed all 265 updates from the Ubuntu Software Center and downloaded Deluge from there. My hard drive is 80GB according to the Disk Usage Analyzer, which also says 31.2GB is used and 47.8GB is free. The confusion comes when I run Deluge: at the bottom it says 2.0GB free space. Is that 2.0GB just a size set by the torrent client that can be changed, or am I limited to just that 2.0GB?

  • Force install apt-get

    - by Web Developer
    I tried installing beanstalkd with sudo apt-get install beanstalkd (also with the -f option) and I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          beanstalkd: Depends: libevent-1.4-2 (>= 1.4.13-stable) but it is not going to be installed
          mysql-server-5.1: Depends: mysql-client-5.1 (>= 5.1.62-0ubuntu0.10.04.1) but it is not going to be installed
                            Depends: libmysqlclient16 (>= 5.1.21-1) but it is not going to be installed
                            Depends: mysql-server-core-5.1 (>= 5.1.62-0ubuntu0.10.04.1) but it is not going to be installed
                            PreDepends: mysql-common (>= 5.1.62-0ubuntu0.10.04.1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
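
    For what it's worth, the fix the error message itself suggests is to let apt repair the broken dependency state first, then retry -- a sketch:

        # let apt fix the unmet dependencies first, as the error suggests
        sudo apt-get -f install
        # then retry the original install
        sudo apt-get install beanstalkd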

  • How to go about rotating logs which are arbitrarily named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do all he wants -- store files, write logs, etc. Of course, the logs are to be rotated, or else the disk will be 100% full in a week. The files can be plenty, but I've dealt with that with paths like /tmp/[a-e]*/* and so on, and lived happily for a while; but as they try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files hit the glob. Also, logrotate would segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it would take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea in circumstances like these to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. And perhaps someone has heard of logrotate alternatives which would work well in such an environment? I don't need emailing of logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
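
    A minimal sketch of the find-based approach mentioned above (the ages and the compress-then-delete policy are assumptions to tune; -print0/-0 keeps odd filenames safe, and -type f skips the sockets that logrotate chokes on):

        # compress files in the developer dirs once they are a week old
        # (this treats every old regular file as a log; add -name/-path filters if needed)
        find /tmp/[a-e]*/ -type f ! -name '*.gz' -mtime +7 -print0 | xargs -0 -r gzip
        # drop compressed ones after a month
        find /tmp/[a-e]*/ -type f -name '*.gz' -mtime +30 -print0 | xargs -0 -r rm -f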

  • Does anyone know how to "tcpdump" traffic decrypted by Mallory MITM?

    - by chriv
    I'm looking for some help in capturing network traffic that I can analyze in Wireshark (or other tools). The tool I'm using is mallory; if anyone is familiar with it, I could use some help. I've got it configured and running correctly, but I don't know how to get the output that I want. The setup is on my private network: I have a VM (running Ubuntu 12.04, Precise) with two NICs:
      - eth0 is on my "real" network
      - eth1 is only on my "fake" network, and is using dnsmasq (for DNS and DHCP for other devices on the "fake" network)
    Effectively, eth0 is the "WAN" on my VM, and eth1 is the "LAN". I've set up mallory and iptables to intercept, decrypt, encrypt, and rewrite all traffic coming in on destination port 443 on eth1. On the device I want intercepted, I have imported the ca.cer that mallory generated as a trusted root certificate. I need to analyze some strange behavior in the HTTPS stream between the client and server; that's why mallory is set up in between for this MITM. I would like to take the decrypted HTTPS traffic and dump it to either a logfile or a socket, in a format compatible with tcpdump/Wireshark, so I can collect it later and analyze it. Running tcpdump on eth1 is too soon (the traffic is still encrypted), and running it on eth0 is too late (it has been re-encrypted). Is there a way to make mallory "tcpdump" the decrypted traffic (in both directions)?

  • How to support tableless columns with WYSIWYG editor?

    - by Andy
    On the front page of a site I'm working on there's a small slideshow. It's not for pictures in particular; any content can go in, and I'm currently setting up the editing interface for the client. I'd like to be able to have one, two, or more columns in the editable area, and ideally that would be via CSS -- does anyone know of a WYSIWYG editor that supports this? I'm using Drupal (I would prefer not to involve Panels, as it would require a bit of work to make it a streamlined workflow for content entry) in case that matters to anyone. To start the ball rolling, one way would be to use templates. I know CKEditor supports templates, and it looks like TinyMCE might have something similar, but I don't know how well these work with tableless columns (the CKEditor homepage demo uses tables to achieve its two-column effect). Holding out for a cool solution!

  • Can I share my cable internet connection through my ADSL wireless router?

    - by Roaders
    Hi all. Over the Christmas period I am at my in-laws'. They have Virgin Broadband (cable) and a basic modem/router that is plugged directly into their computer with an ethernet cable. My wife and I arrived with 5 PCs! (OK, one is a gift and won't be used.) Four of them are laptops, so I would like to be able to use their internet connection. At the moment I am working, so I have plugged the ethernet cable into my work laptop; rebooting the router meant that my work laptop now has internet. I have my ADSL Netgear router, which is wireless. I tried plugging it in between the router and the PC, but I didn't seem to be able to share the internet connection wirelessly: the original PC still had internet despite only being connected to my router, but my wireless laptop didn't have a connection. My old cable router had an internet ethernet port on the back that the modem plugged into; my ADSL router doesn't -- it has a phone connection socket. Is there a way of doing what I want with the equipment I have? Thanks
