Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • Cost of exception handlers in Python

    - by Thilo
    In another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if-statement?
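
    A quick way to measure this yourself is a minimal timeit sketch. The integer-division guard below is an illustrative assumption, not from the original question:

        import timeit

        def with_if(d):
            # compare-to-zero guard
            if d == 0:
                return 0
            return 10 // d

        def with_try(d):
            # try/except guard; the handler is never entered when d != 0
            try:
                return 10 // d
            except ZeroDivisionError:
                return 0

        # d = 5, so the exception is never triggered, matching the question's premise.
        for fn in (with_if, with_try):
            t = timeit.timeit(lambda: fn(5), number=1_000_000)
            print(f"{fn.__name__}: {t:.3f} s per 1M calls")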

    Read the article

  • WCF - Network Cost

    - by Mubashar Ahmad
    Dear devs, I have a WCF service deployed on IIS with basicHttpBinding and aspNetCompatibilityEnabled=true. I also have a test client which invokes multiple service functions simultaneously. To check the performance of a service call on the client and the server, I calculated the average time it takes to complete a service request on the client (in the proxy code) and on the server as well. After a test of 8 hrs (server and client were on the same machine), I found that the average response time on the client is around 34ms, whereas the average execution time on the server is around 3ms, so the difference is 31ms. I would like to know why every call is taking an extra 31ms. Is that justified, and how can I reduce it?

    Read the article

  • DNN 5.0 Modules available for Free or less cost

    - by Harryboy
    I am planning to build a CMS with features like blog, forum, chat, broadcast, video, document management, poll, dashboard, advertisement, alerts and reminders, events, tasks, etc. I am thinking of using DNN for development, as I believe most of these modules are already available and I would only need to do the customization. Could you suggest which modules are available in the core DNN Community Edition? Does anybody have a list of the modules available for DNN 5.0? Please provide a link for the same.

    Read the article

  • ColdFusion speed cost of FileExists

    - by davidosomething
    I want to, on every page:

    - check if a file exists
    - include that file if TRUE

    i.e.:

        <cfset variables.includes.header = ExpandPath("_inc_header.cfm")>
        <cfif FileExists(variables.includes.header)>
            <cfinclude template = "#variables.includes.header#">
        </cfif>

    Is this a good idea?
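
    If the per-request disk check does turn out to matter, the usual fix is to test once and cache the answer (in CF the flag could live in the application scope). The idea in a Python sketch, since it is language-agnostic; the path is a placeholder:

        import os
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def file_exists(path):
            # Hits the filesystem once per distinct path; later calls
            # are dictionary lookups.
            return os.path.isfile(path)

        if file_exists("/app/_inc_header.cfm"):  # placeholder path
            pass  # include the header template here

        # Trade-off: a header file added later is invisible until
        # file_exists.cache_clear() is called.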

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once, when an object of type A is created, and will never need to be resized.
    - The size of v will vary across instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream them through the CPU, performing only very simple computations on the member variables.

    - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    - What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
    - If such fragmentation is a performance issue, are there techniques that would allow A and v to be allocated in a contiguous region of memory?
    - Are there techniques to aid memory access, such as a prefetching scheme? For example, fetch an object of type A and operate on the other member variables while prefetching v.
    - If the size of v, or an acceptable maximum size, were known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, running Windows or Linux, compiled with either GCC or MSVC.

    Read the article

  • Renting an "EC2" server vs. buying one (for a startup in its initial stages)

    - by krish p
    We are a small startup in the early stages, working on a SaaS-based Rails product. We currently use a small EC2 instance and need another large/extra-large instance as we begin deploying to the cloud and get ready to release our "alpha" version. While EC2 was my choice for numerous reasons (reliability; accessibility, since our small team is geographically dispersed; maintainability; and things of that nature), it appears to be rather expensive. Given that the product will ultimately be deployed in the cloud (be it EC2 or otherwise), and that the experience would help the development team, would it make sense to purchase a physical server and stick it in the basement, or to bite the bullet and pay the price for EC2 (or another cloud provider)? Such decisions are driven by numerous factors, so it would certainly help to get the thoughts of other folks who may have been in similar situations. Hence the post. Thanks much!
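
    One way to frame the rent-versus-buy question is a quick break-even calculation. Every figure below is a placeholder assumption, not a real quote:

        # Hypothetical numbers for illustration only.
        server_price = 3000.0     # up-front cost of a physical box (assumed)
        monthly_extras = 150.0    # power/colo/admin estimate for that box (assumed)
        ec2_monthly = 400.0       # large-instance monthly estimate (assumed)

        # Months until the purchased server has paid for itself versus renting.
        break_even = server_price / (ec2_monthly - monthly_extras)
        print(f"break-even after {break_even:.1f} months")  # 12.0 months here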

    Read the article

  • How much does Website Development cost nowadays?

    - by Andreas Grech
    I am thinking of setting up my own freelance business, but coming from a workplace that offers a particular service to huge clients, I do not know what the current charges for websites are nowadays. I know that as technology keeps changing (most of the time for the better), the amount you can charge for a single website is constantly shifting. For example, I don't think static websites (with just static HTML pages) are that expensive today, no? (As I said, I might be mistaken, since I haven't really touched this freelance industry yet.) So, freelance web developers out there, can you give me estimates of what you charge your clients? Some examples of websites that I would like an approximate charge for:

    - ~10 static HTML pages
    - ~10 DHTML pages (with maybe a flashy menu on the top/side)
    - database-driven websites with a standard CMS (be it one you developed or an existing one)
    - database-driven, but with a custom-built CMS for the particular client
    - using an existing template for the design
    - starting the design from scratch
    - etc.

    I know that clients normally don't care about the technologies used to construct their websites, but do you charge differently according to which technology you build the website with? That is, is the technology (ASP.NET, PHP, Ruby on Rails, etc.) a factor when setting the price? Also, how do you go about charging your clients for your services? What are the major factors you consider when setting a price tag for a website? And better yet, how do you even find prospective clients? (Or should I leave that for a different post?) By the way, please mention some numbers in your post (in USD, GBP, EUR or anything), because I want to be able to calculate some averages when the answers stack up.

    Read the article

  • Inline function and calling cost in C

    - by Eonil
    I'm making a vector/matrix library. (GCC, ARM NEON, iPhone)

        typedef struct { float v[4]; } Vector;
        typedef struct { Vector v[4]; } Matrix;

    I pass struct data as pointers to avoid the performance hit of copying data in function calls, so I designed the function like this:

        void makeTranslation(const Vector* factor, Matrix* restrict result);

    But if the function is inlined, is there any reason to pass values as pointers for performance? Are those variables copied too? What about registers and caches?

        inline Matrix makeTranslation(Vector factor) __attribute__ ((always_inline));

    How do the calling costs of the two versions compare?

    Read the article

  • Is DevTeach worth the cost?

    - by Mafuba
    Wondering if people who have attended DevTeach could comment on the value of the conference (e.g. well-organized, good sessions, good material, etc.). This post seems to indicate that it is not worth the money.

    Read the article

  • What is the cost of object creation?

    - by Tony
    Hi. If I have to choose between a static method and creating an instance to use an instance method, I will always choose static methods. But what is the detailed overhead of creating an instance? For example, I saw a DAL which could have been written with static classes, but they chose to make it instance-based, and now in the BLL, on every single call, they call something like:

        new Customer().GetData();

    How bad can this be? Thanks
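
    The overhead is easy to measure. Here is a sketch of the same pattern in Python; the Customer class below is a stand-in for illustration, not the poster's DAL:

        import timeit

        class Customer:
            def get_data(self):
                return 42

        _shared = Customer()

        def fresh_instance():
            # the pattern in question: allocate an object, call one method, discard
            return Customer().get_data()

        def reused_instance():
            return _shared.get_data()

        for fn in (fresh_instance, reused_instance):
            t = timeit.timeit(fn, number=1_000_000)
            print(f"{fn.__name__}: {t:.3f} s per 1M calls")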

    Read the article

  • Upcoming events: OBUG Connect Conference 2012

    - by Maria Colgan
    The Oracle Benelux User Group (OBUG) have given me an amazing opportunity to present a one-day Optimizer workshop at their annual Connect Conference in Maastricht on April 24th. The workshop will run as one of the parallel tracks at the conference and consists of three 45-minute sessions. Each session can be attended stand-alone, but they build on each other to allow someone new to the Oracle Optimizer or SQL tuning to come away from the conference with a better understanding of how the Optimizer works and what techniques they should deploy to tune their SQL. Below is a brief description of each of the sessions.

    Session 7, 11:30: Oracle Optimizer: Understanding Optimizer Statistics
    The workshop opens with a discussion of Optimizer statistics and the features introduced in Oracle Database 11g to improve the quality and efficiency of statistics-gathering. The session will also provide strategies for managing statistics in various database environments.

    Session 27, 14:30: Oracle Optimizer: Explain the Explain Plan
    The workshop continues with a detailed examination of the different aspects of an execution plan, from selectivity to parallel execution, and explains what information you should be gleaning from the plan.

    Session 47, 15:45: Top Tips to Get Optimal Execution Plans
    Finally, I will show you how to identify and resolve the most common SQL execution performance problems, such as poor cardinality estimates, bind peeking issues, and selecting the wrong access method.

    Hopefully I will see you there! +Maria Colgan

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use or recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal is to know when growth has reached the point where it's necessary to scale up (or out) one or more of the pieces, because the current system won't manage the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: it'll be Tomcat 6 using APR (possibly with Varnish or a similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS, depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.), and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such, but I suspect I'd be reinventing a wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others? Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
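
    If you do end up scripting it yourself before adopting something like Hyperic HQ, a minimal periodic logger takes only a few lines. This sketch assumes the third-party psutil package:

        import time
        import psutil  # third-party: pip install psutil

        # Print one line of headline metrics per minute; redirect to a file
        # and graph it later to spot trends before they become bottlenecks.
        while True:
            cpu = psutil.cpu_percent(interval=1)
            mem = psutil.virtual_memory().percent
            disk = psutil.disk_io_counters()
            net = psutil.net_io_counters()
            print(f"cpu={cpu}% mem={mem}% "
                  f"disk_r={disk.read_bytes} disk_w={disk.write_bytes} "
                  f"net_rx={net.bytes_recv} net_tx={net.bytes_sent}")
            time.sleep(59)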

    Read the article

  • Web Server Routing Based On Location

    - by Eric
    I have a website that has users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong suffer latency problems: traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users on the .hk TLD are redirected to the Hong Kong web server. It shares the same database server with the Australian server, but thanks to aggressive SQL query caching, the performance impact of SQL query latency is negligible. However, users accustomed to the Hong Kong website who have since traveled to Australia suffer additional latency, because they go to the .hk site, which redirects to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong, so this is a significant issue for me. Instead of redirecting users to the closest web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, Postgres and Django. Say I know how to estimate a user's location from their IP address: what is my next step? At what level would I work on this? What topics should I read up on?
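
    For the geolocation step specifically, Django ships a GeoIP wrapper. A rough sketch of a redirect middleware for the Australian server follows; it assumes the geoip2 package plus a MaxMind database configured via GEOIP_PATH, and the mirror hostname is a placeholder:

        from django.contrib.gis.geoip2 import GeoIP2
        from django.http import HttpResponseRedirect

        HK_SERVER = "http://hk.example.com"  # placeholder for the HK mirror

        class GeoRedirectMiddleware:
            """On the AU server, send visitors located in Hong Kong to the HK mirror."""

            def __init__(self, get_response):
                self.get_response = get_response
                self.geo = GeoIP2()

            def __call__(self, request):
                ip = request.META.get("REMOTE_ADDR", "")
                try:
                    country = self.geo.country_code(ip)  # e.g. "HK" or "AU"
                except Exception:
                    country = None  # private or unknown addresses
                if country == "HK":
                    return HttpResponseRedirect(HK_SERVER + request.get_full_path())
                return self.get_response(request)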

    Read the article

  • Web-based SVG or JavaScript Org Chart or Tree Graph Plotting Visualization API

    - by asoltys
    Hi, I'm looking to build an interactive web-based org chart for a large organization. I somewhat like the interface at ancestry.com where you can hover over people and pan/zoom around and click on different nodes to make them the root. Ideally, I'd like it if people could belong to multiple organizational entities like committees, working groups, etc. In other words the API should support graphs in general, not just trees. I'd like to be able to visually explode each organizational substructure into substituents by clicking on it, with a nice animation of the employees ballooning or spilling out so you can really interactively drill down through the organization. I found http://code.google.com/apis/visualization/documentation/gallery/orgchart.html but it looks a bit rudimentary. I know there are desktop tools like OrgPlus and Visio that can build static charts but I'm really looking for a free, web-based API with open standards-based output like SVG or HTML5 Canvas elements rather than Flash or some proprietary output. Something I can embed into a custom web application and style myself. Something interactive.

    Read the article

  • Is browser based wireless authentication secure?

    - by johnnyb10
    Our wireless network previously used a preshared WPA/WPA2 key for guest access, which gives guests access to the Internet. (Our employee access uses 802.1x authentication.) We just had a wireless consultant come in to fix various wireless issues we had; one of the things he wound up doing was changing our guest access to HTML-based authentication instead of the preshared key. So now that guest SSID is open (instead of using WPA), and users are presented with a browser-based login screen before they can get on the Internet. My question is: is this an acceptable method from a security standpoint? I would assume that having an open network is necessarily a bad idea, but the consultant said that the traffic is still using PEAP, so it's secure. I didn't get a chance to question him further on this because we ran late and a bunch of other things came up. Please let me know what you think about the advantages/disadvantages of using HTML-based wireless authentication as opposed to a preshared WPA key. Thanks...

    Read the article

  • IPTables Reroute SSH based on Connection string?

    - by senrabdet
    We are using a cloud server (Debian Squeeze) where public ports on a public IP route traffic to internal servers. We are looking for a way to use iptables and ssh where, based on some part of the ssh connection string (or something along those lines), iptables will reroute the ssh connection to the "right" internal server. This would allow us to use one common public port and then reroute ssh connections to individual servers. For example, we hope to do something like the following:

    - The user issues an ssh connection (public key encryption), such as ssh -X -v -p xxx user@host, but perhaps adds something to the string for iptables to use.
    - iptables uses some part of that string, or some other means, to reroute the connection to an internal server using something like:

        iptables -t nat -A PREROUTING ! -s xxx.xxx.xxx.0/24 -m tcp -p tcp --dport $EXTPORT -j DNAT --to-destination $HOST:$INTPORT

      where $HOST is the internal IP of a server, $EXTPORT is the common public-facing port and $INTPORT is the internal server port.

    It appears that the "string" match of iptables does not do what we want. We can currently route with the iptables syntax above, but that relies on having a separate public port for each server; we hope to use one common public port and then reroute to specific internal servers based on some part of the ssh connection string or some other means. Any suggestions? Thanks!

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle Open World last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle's commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on-premise and the various On Demand options as business needs arise.

    In today's business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on high-priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best-laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes. The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility, such as future upgrades and product updates, to Oracle technology experts, at an affordable and expected price.

    Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer's overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world-class business functionality and Oracle On Demand's embassy-grade security, Oracle's hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers' business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and time to production less than a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and a secured URL, all within a private cloud.
    Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission-critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services to give customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency. Currently, over 50 PeopleSoft customers have delegated the management of their applications to us through Oracle On Demand. If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that hard disk drive storage is significantly cheaper per GiByte than solid state devices. Within the database space this is wholly inaccurate: people need to look at the cost of the complete solution, not just a single component part in isolation from what is really required to meet the business requirement.

    Buying a single Hitachi Ultrastar 600GB 3.5" SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012), compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I've not included the FusionIO ioDrive because there is no public pricing available for it, something I never understand; when companies do this I immediately think "what are they hiding?". Luckily in FusionIO's case the product is proven, though it is expensive compared to OCZ's enterprise offerings. On the face of it, the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies. Do not be fooled by this bullshit, people!

    What is the requirement? The database will have a static size of 400GB, kept static through archiving so growth and trim will balance the database size; the client requires resilience; and there will be several hundred call centre staff querying the database, where queries read a small amount of data but there is no hot spot in the data, so the randomness will come across the entire 400GB of the database. Estimates predict that approximately 4,000 IOps will be required at peak times, and because it's a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined, and we have three of the most important pieces of the puzzle: space required, estimated IOps and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before it can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity.

    The hardware solution needs to be defined, and there are two possible solutions: hard disk or solid state based. The real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, and ditto for the solid state.

    Hard Drive solution

    In a test on an HP DL380 (P410i controller) using IOMeter against a single 15Krpm 146GB SAS drive, with a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk (thus the 40GiB file is on the outer edge of the drive, so more sectors can be read before head movement is required): for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but with a 130GiB test file: for 100% sequential IO, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO, 528 IOps at an average latency of 217.49ms (4 MiB/s).

    From the results it is clear that random performance gets worse as the disk fills up; I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40GiB of the 146GB is used gives nearly the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same methodology; for the same test above on a 130GiB file, the performance boost is near linear: for each drive added, throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces nearly 50% (172ms, 94ms, 65ms, 47ms, 37ms, 30ms). This is because the same 130GiB is spread out more as you add drives (130/1, 130/2, 130/3, etc.), so implicit short stroking occurs: there is less file on each drive, so less head movement is required. The best latency is still 30ms, but we now have the IOps required; that is on a 130GiB file, though, not the 400GiB we need. A reality check here: the drive randomness is more likely to be 50/50 than a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data, the worse the effect.

    For argument's sake, let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives. A quick spec from DELL for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5" disks, is priced at £7,438.00; note it's probably more once we add the two controller cards to sit in the server, racking, etc. The minimum cost, taking the DELL quote as an example, is therefore:

        {Cost of Hardware} / {Storage Required}
        £7,438.00 / 400 = £18.595 per GB

    £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400GB, and the extra disks had to be purchased to get the IOps up.

    Solid State Drive solution

    A single card significantly exceeds the IOps and latency required; for resilience, two will be required:

        (£2,316.54 * 2) / 400 = £11.58 per GB

    With the SSD solution only two PCIe sockets are required: no external disk units, no additional controllers, no redundant controllers, etc.

    Conclusion

    I hope this example has dispelled the myth that hard disk drives are cheaper per GiB than solid state: £11.58 per GB for SSD compared to £18.59 for hard disk. I've not even touched on the running costs; compare the cost of running 18 hard disks, with all that heat and power, to two PCIe cards! Just a quick note: I've left a fair amount of information out, due to this being a blog post. If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date; that concern is way overdone now. Five years ago, yes; today, no.
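
    The headline numbers reduce to one line of arithmetic each; here they are in runnable form, so the comparison is easy to re-run with your own quotes:

        # Figures from the worked example above (GBP, March 2012 quotes).
        hdd_array_cost = 7438.00    # DELL PowerVault MD1220, 16 x 146GB 15Krpm disks
        ssd_card_cost = 2316.54     # OCZ 600GB Z-Drive R4; two needed for resilience
        usable_gb_required = 400    # the business requirement, not raw capacity

        hdd_per_gb = hdd_array_cost / usable_gb_required       # 18.595
        ssd_per_gb = (ssd_card_cost * 2) / usable_gb_required  # 11.58
        print(f"HDD: £{hdd_per_gb:.2f}/GB  SSD: £{ssd_per_gb:.2f}/GB")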

    Read the article

  • SysAdmin Career Question: Internal or Client Based

    - by Malnizzle
    ServerFault community, it seems there are two positions sysadmins find themselves in: either you work for a single, non-IT-services client (your employer) and provide in-house IT support, or you work for a company that provides outsourced IT services to multiple clients. Right now I work for a company that does the latter, and I often consider how nice it would be doing the in-house side of things: to have just one network I am focused on, and instead of feeling like I have a dozen bosses between clients and internal management, just one set of management and people to appease. There is also the technical aspect of every client wanting something different, and having to manage numerous different technology platforms, or trying to force clients into using the technologies we prefer; neither situation is enjoyable. Is this just "the grass is greener on the other side" syndrome, or is there some legitimacy to the stress of client-based IT work compared to being an in-house IT guy? Thanks!

    Read the article

  • Installing an EC2 (EBS based) instance on another AWS account

    - by imaginative
    We're moving from one AWS account to another. I have a couple of EC2 instances I would like to move over. These instances are EBS-based. Googling around, I'm seeing all sorts of convoluted answers for how to appropriately launch an EBS-based instance on a new account. Surely there has to be a simple way of going about this. What I have done so far is back up the EBS volume as a snapshot, then alter the permissions of the snapshot to allow the new AWS account number to access it. I am not sure where to go from here. Do I have to create an image from the EBS snapshot? Upload it to S3? What are the next steps?
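
    For what it's worth, the remaining steps can be scripted with today's boto3, and no S3 round-trip is needed for an EBS-backed image. The account ID, snapshot ID, region, profile names and device names below are all placeholders:

        import boto3

        SNAPSHOT = "snap-0123456789abcdef0"   # placeholder snapshot ID
        NEW_ACCOUNT = "111122223333"          # placeholder account number

        # 1. From the old account: share the snapshot with the new account.
        old = boto3.Session(profile_name="old-account").client("ec2")
        old.modify_snapshot_attribute(
            SnapshotId=SNAPSHOT,
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=[NEW_ACCOUNT],
        )

        # 2. From the new account: copy the shared snapshot so you own it.
        new = boto3.Session(profile_name="new-account").client("ec2")
        copy = new.copy_snapshot(SourceRegion="us-east-1",  # placeholder region
                                 SourceSnapshotId=SNAPSHOT)
        new.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

        # 3. Register an AMI from your copy of the snapshot...
        image = new.register_image(
            Name="migrated-instance",
            RootDeviceName="/dev/sda1",
            BlockDeviceMappings=[
                {"DeviceName": "/dev/sda1", "Ebs": {"SnapshotId": copy["SnapshotId"]}},
            ],
        )

        # 4. ...and launch a replacement instance from it.
        new.run_instances(ImageId=image["ImageId"], MinCount=1, MaxCount=1,
                          InstanceType="m3.large")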

    Read the article

  • Prebuilt ActiveMQ Server Based off ZeroMQ

    - by VxJasonxV
    Are there any distributions of fully built message brokers that are based on ZeroMQ? I had thought that downloading/installing ZeroMQ would give me such a thing, not just a handful of procedures for rolling my own. Currently we use ActiveMQ, but it is a miserable pain to configure, so I'd rather slim down the profile; unfortunately, I learned that ZeroMQ is not a one-step solution to achieving that goal. Alternatives are OK, but I'd prefer something less verbose to configure than ActiveMQ's ludicrous amounts of Java configuration: broker config, Java logger config, and many other intricacies that I don't wish to deal with. (Read: preferably not Java-based in the first place.) I'm looking for the reliable, basic functionality described by JMS brokers: topics, queues, message persistence, etc.
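
    That matches ZeroMQ's design: it is a messaging library, and even the classic "broker" is something you assemble yourself. A minimal queue device in Python with pyzmq, just for a sense of scale:

        import zmq

        # Clients connect REQ sockets to :5559, workers connect REP sockets
        # to :5560, and zmq.proxy shuttles messages between the two. Anything
        # beyond this (persistence, topics, acknowledgements) is yours to
        # build, which is why ZeroMQ is not a drop-in ActiveMQ replacement.
        ctx = zmq.Context()

        frontend = ctx.socket(zmq.ROUTER)
        frontend.bind("tcp://*:5559")

        backend = ctx.socket(zmq.DEALER)
        backend.bind("tcp://*:5560")

        zmq.proxy(frontend, backend)  # blocks forever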

    Read the article

  • Routing based on source address in Windows Server 2008 R2

    - by rocku
    Hi, I'm implementing a direct-routing load-balanced solution using Windows Server 2008 R2 as the back-end server. I've configured a loopback interface with the external IP address. This works: I am receiving packets with the external IP address and responding to them appropriately. However, our infrastructure requires that traffic being load-balanced go through a different gateway than any other traffic originating from the server (updates, etc.). So basically I need to route packets based on source address (the external IP) to another gateway. The built-in Windows 'route' command allows routing based on destination address only. I've tried setting a default gateway on the loopback interface and played with the weak/strong host send/receive parameters on the interfaces, but this didn't work. Is there any way around this, possibly using third-party tools?

    Read the article

  • Web based library management (FOSS) application

    - by pgb
    I'm looking for a FOSS web-based library management application. I want to manage the few books we have at our company, and I'm looking for applications that provide a simple web interface to:

    - add new books
    - search for books
    - (not required) place holds on books

    My ideal would be something like Delicious Library, but web-based and open source. I've looked at Koha (which I could not install), Emilda (which seems to be a dead project; the last emails are from 2006, and I could not install it either), and VuFind, which looks awesome but doesn't provide (or I could not find) an interface for adding new records to the library. Any suggestions?

    Read the article

  • Nginx as a reverse proxy for Apache name-based vhosts

    - by Ben Carleton
    I am running several websites on Apache, currently using name-based vhosts. All of the sites are on the same server. I would like to add Nginx on a new server to sit in front of Apache as a caching reverse proxy. What is the best way to handle the multiple name-based vhosts? Should I have Nginx handle the names and run each Apache vhost on a separate port? Or is there a way to just have Nginx pass the hostname through to Apache and let Apache take care of the domain names?
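
    The hostname-passing variant usually comes down to forwarding the original Host header, so Apache's existing vhosts keep working untouched. A minimal Nginx config sketch, assuming the Apache box answers on 192.0.2.10:8080 (addresses, names and cache paths are placeholders):

        http {
            proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site_cache:10m;

            upstream apache_backend {
                server 192.0.2.10:8080;  # assumed address of the Apache server
            }

            server {
                listen 80;
                server_name example.com example.org;  # all proxied vhost names

                location / {
                    proxy_cache site_cache;
                    proxy_set_header Host $host;  # preserve the requested vhost
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_pass http://apache_backend;
                }
            }
        }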

    Read the article

  • Reputable web based ssh client? [closed]

    - by Doug T.
    I'm connected to a coffee shop's wireless network right now, and I suspected I'd be able to use my laptop and ssh somewhere. Unlucky me, they seem to be blocking everything but web traffic (my testing seems to show nothing but port 80 is working; I can't ping, ftp, etc.). I googled "web-based ssh clients", but I have reservations about entering my login credentials in any Joe Schmoe's web app. I was wondering if anyone has had experience with a reputable web-based ssh client? If so, could you please point me to one that I can trust?

    Read the article
