Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

  • [openVPN] Server & client on the same machine, and multiple VPN servers

    - by HiWorld
    Hello everyone, I'm stuck configuring OpenVPN to build a chained VPN connection, like this: CLIENT - VPN1 - VPN2 - INTERNET. I already know how to set up a normal single VPN, so let me explain what I have and how I set it up. On VPN1 I have one OpenVPN instance running as a server (where CLIENT connects) and another running as a client connecting to VPN2's server. Here comes the problem: once VPN1 is connected as a client of VPN2, I can no longer connect to VPN1 from CLIENT. My question is how to proceed with this. There is also a third instance working as a server, for using VPN1 without the chain. On VPN2 there is one OpenVPN instance running as a server, where VPN1 connects and is then forwarded to the net. I'm using TUN interfaces in the configs, and iptables is set up this way:

        VPN1 - OpenVPN server1 network: 192.168.6.0 / IP as client of VPN2: 192.168.5.70
        iptables -t nat -A POSTROUTING -s 192.168.6.0/24 -j SNAT --to-source 192.168.5.70

        VPN2 - OpenVPN server2 network: 192.168.5.0
        iptables -t nat -A POSTROUTING -s 192.168.5.0/24 -j SNAT --to-source EXTERNAL_IP_TO_INTERNET

    Hope someone can help me with this. Thanks in advance.
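    One common cause of this symptom (a guess, not a confirmed diagnosis): once VPN1 connects as a client of VPN2, its default route moves into the tunnel, so replies to CLIENT's public address leave through VPN2 and never make it back. A policy-routing sketch for VPN1, using hypothetical values (eth0 = WAN interface, 203.0.113.10 = VPN1's public IP, 203.0.113.1 = its gateway):

        # keep traffic that originates from VPN1's own public address on the
        # physical uplink instead of routing it into the VPN2 tunnel
        ip route add default via 203.0.113.1 dev eth0 table 100
        ip rule add from 203.0.113.10 table 100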

  • How to redirect a specific url through a proxy for multiple services?

    - by CrystalFire
    I have a website hosted on 000webhost.com for free. I am unable to connect directly to the site because Comcast has blocked a portion of 000webhost's servers for free accounts, due to other people hosting malicious content. In order to maintain my website, I cannot use my computer to connect directly to the server. I am wondering if there is a way to specifically forward attempts to access the server through a proxy, transparently. The current system I am on is Windows, but I also have systems running Mac OS X and Linux, so a solution for any of these would be fine. I've found answers that work for HTTP, but I'm looking for a solution that will let me use all the other functions as well, such as FTP and SSH.
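    One option that covers arbitrary TCP clients (SSH, FTP and more) on Linux and OS X is proxychains, which wraps a program and forces its TCP connections through a SOCKS proxy. A sketch, assuming you can open a SOCKS tunnel through some host Comcast doesn't block (the host names and port are placeholders):

        # create a local SOCKS5 proxy via an unblocked intermediary host
        ssh -D 1080 user@some-unblocked-host.example.com

        # /etc/proxychains.conf -- point wrapped programs at that proxy
        [ProxyList]
        socks5 127.0.0.1 1080

        # then run any client through it
        proxychains ssh user@myserver.000webhost.com
        proxychains ftp myserver.000webhost.com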

  • How to verify multiple properties on an object passed as parameter?

    - by Sandbox
    I want to verify multiple properties on an object passed as a parameter.

        Mock<IInternalDataStore> mockDataStore = new Mock<IInternalDataStore>();

    I can think of doing it this way. Is this correct? Does a better way exist?

        mockDataStore.Setup(o => o.PlaceQuickOrder(It.Is<IOrder>(order => order.Id == 1)));
        mockDataStore.Setup(o => o.PlaceQuickOrder(It.Is<IOrder>(order => order.type == OrderType.Qucik)));
        mockDataStore.Setup(o => o.PlaceQuickOrder(It.Is<IOrder>(order => order.UnitName == "NYunit")));
        mockDataStore.VerifyAll();

    Another way of achieving this would be to create a fake order object, expectedOrderObj, with the expected properties and do something like this:

        mockDataStore.Setup(o => o.PlaceQuickOrder(It.Is<IOrder>(order => order == expectedOrderObj)));

    But I don't want to override ==. Do we have a solution for this in Moq? My classes look something like this:

        public interface IInternalDataStore
        {
            void PlaceQuickOrder(IOrder order);
            void PlaceUltraFastOrder(IOrder order);
        }

        public interface IOrder
        {
            int Id { get; }
            OrderType type { get; set; }
            string UnitName { get; set; }
        }

        public enum OrderType
        {
            Qucik = 1,
            UltraFast = 2
        }
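    For what it's worth, a common Moq pattern (a sketch against the stock Moq API; the values mirror the question) is to skip the Setup calls, act on the system under test, and then assert with a single Verify whose predicate checks all the properties at once. This also guarantees all three checks hit the same call, which three independent Setups do not:

        mockDataStore.Verify(o => o.PlaceQuickOrder(It.Is<IOrder>(order =>
            order.Id == 1 &&
            order.type == OrderType.Qucik &&
            order.UnitName == "NYunit")), Times.Once());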

  • More efficient way to update multiple elements in JavaScript and/or jQuery?

    - by Seiverence
    Say I have 6 divs with IDs "#first", "#second" ... "#sixth", and I want to execute functions on each of them. I would set up an array containing the name of each div I want to update as a string, e.g. ["first", "second", "third"]. To apply a function, say changing the background color to red, I would iterate over the array:

        function updateAllDivsInTheList() {
            for (var i = 0; i < array.length; i++) {
                $("#" + array[i]).changeCssFunction();
            }
        }

    Whenever I create a new div, I add it to the array. The issue is, if I have a large number of divs that need to be updated, say 1000 out of 1200, it may be a pain and a performance tank to iterate sequentially through every single element in the array. Is there some more efficient way of updating multiple divs without iterating through every element of an array, maybe with some more efficient data structure? Or is what I am doing already the most efficient way? An example would be great.
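    As an aside, jQuery is already set-based, so a shared class removes both the array bookkeeping and the explicit loop (the class name here is invented):

        // tag each div you create with class="updatable":
        //   <div id="first" class="updatable"></div>
        // one selector then updates all of them at once
        $('.updatable').css('background-color', 'red');

    jQuery still iterates internally, of course, but the selector engine does the walking and there is no hand-maintained list to keep in sync.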

  • What is the best tool to aggregate traffic stats from multiple nginx servers?

    - by gekkz
    The setup:
    - 2 or more nginx machines
    - each machine has the same virtual hosts
    - traffic is load balanced via DNS across the machines

    I need to figure out the best tools for collecting traffic stats; I'm mostly interested in the number of hits and the total traffic in gigabytes. Obviously, the log information will come from nginx, formatted like this:

        log_format main '$remote_addr $host $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
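    GoAccess, AWStats or Webalizer can consume logs in this format, but for just hits and gigabytes per vhost a quick pass over the combined logs also works. A sketch, assuming the layout above (so awk's $2 is $host and $10 is $body_bytes_sent, which holds as long as $request keeps its usual three tokens):

        cat access.log.node1 access.log.node2 | awk '
            { hits[$2]++; bytes[$2] += $10 }
            END { for (h in hits)
                      printf "%s: %d hits, %.2f GB\n", h, hits[h], bytes[h] / 2^30 }'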

  • Why will multiple DVD/CD-ROM drives stop reading?

    - by ddc0660
    I'm having difficulty with two DVD/CD drives. When I upgraded from XP to Vista, I noticed they had stopped working: I could put CDs or DVDs in the drives and they would attempt to read the disc, but never "found" a disc in the drive. Vista showed the drives existed without problems, and updating their drivers changed nothing. I thought that perhaps, oddly, both drives were going bad. Then I got Windows 7, and I figured I'd have to do some gymnastics to get it onto my system without a working disc drive. However, I can boot from the Windows 7 disc fine! I installed 7 hoping it was a Vista problem and I'd reclaim my drives, but that is not the case; the same symptoms persist. Thoughts?

  • How can I retrieve multiple nutrients from the ESHA Research API? (apid.esha.com)

    - by user1833044
    I want to call the ESHA Research nutrient REST API, but I cannot figure out how to request multiple nutrients in one call. So far, each call retrieves only the calories, or the protein, or one other type of nutrient, so I'm hoping someone has experience retrieving all of the nutrient information with a single call. Is this possible? This is how I retrieve the TWIX calories (note the API key is not literally xxxx, but a key generated by ESHA once you sign up as a developer):

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da

    The response is in JSON format. To request fat instead, I add the nutrient's id:

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da&n=urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd

    How can I make one call and get back all the nutrients (Fat, Calories, Carbs, Vitamins, etc.) for a particular food ID? I have researched this for a while and cannot seem to find the answer. Thanks in advance for your help.

  • Is there a way to manage browser [Chrome] session cookies with a fast toggle for multiple users?

    - by ResoluteHiker
    I feel I should go into more detail. My wife and I share a laptop and browse email, and we keep ending up in each other's Gmail accounts, having to log out and log back in. Are there any good extensions or add-ons that would let us toggle back and forth between accounts? This doesn't necessarily need to apply just to Gmail, but to any cookie, session, etc. I'd be willing to use Firefox if such an extension exists for it. Much appreciated!

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database Continuous Integration
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)

    Setting up Continuous Integration with Red Gate tools
    · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (Product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)

    Other
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)

    Technorati Tags: SQL Server, Continuous Integration

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications, including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution:

    "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something."
    - Enterprise System Journal: A New Approach to Database Security, by James E. Powell

    "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks."
    - Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com), by Sean Michael Kerner

    "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through."
    - Network World: Oracle Database Firewall defuses SQL injection attacks, by Tim Green

    "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user."
    - ZDNet: RSA Roundup: Oracle Database Firewall, by Larry Dignan

    "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said."
    - eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security, by Fahmida Y. Rashid

  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, many new features and updates arrived in the Windows Azure platform. In the coming several posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression it left: the layout was not very attractive and the functionality was minimal. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump. To improve usability and add features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features have been added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated.

    But Silverlight brought some problems. The first is browser capability: most mobile and tablet browsers don't allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from an iPad, iPhone, Windows Phone, etc., even when all they need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience, but it needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start talking about the new preview portal, I should highlight that it is a PREVIEW version. Although you can do almost everything that was available in the old portal, plus some cool new features I will mention in the coming posts, some things are still being developed and migrated, so sometimes you will need to switch back to the old portal. For example, in the preview portal there is no co-admin management, no remote desktop function, and the SQL database management function takes you back to the old SQL Azure Management Portal. Microsoft has said these missing features will be added to the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, now points to the preview version, you need to click the preview button at the top of the page and then the "Take me to the previous portal" link to go back.

    Overview

    There are four parts to the preview portal. On top is the header, which shows the account you are currently logged in with. If you click the header it shows the top menu of Windows Azure, from which you can navigate to the Windows Azure home page, the pricing information page, community and account pages, etc. The navigation bar is on the left-hand side, with the following categories:

    ALL ITEMS: all items in your Windows Azure account, including web sites, services, databases, etc.
    WEB SITES: the web sites in your Windows Azure account. Only the web sites you have are shown; the linked resources appear when you drill down into a web site.
    VIRTUAL MACHINES: the virtual machines you have deployed to Azure.
    CLOUD SERVICES: all Windows Azure hosted services in your account.
    SQL DATABASES: all SQL databases (SQL Azure) in your account.
    STORAGE: all Windows Azure storage services in your account.
    NETWORKS: the virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page based on the currently selected category; if there are no items, a quick-create link is shown instead. At the bottom of the page is the command and information bar: based on what is selected and what the user is doing, it shows related information and commands. For example, while I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar; once the site was ready, the command bar showed commands I could run against it. "Web Sites" is a new feature introduced with this upgrade; it gives us an easier and quicker way to establish a web site from scratch or from an existing library, and I will cover it in more detail in the next post. Also in the command bar, you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Services?

    Windows Azure Hosted Services have been renamed Cloud Services. Creating a new service is easy: click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; you just specify the service URL and the affinity group/region. The service then appears in the list, and clicking the item shows all of its information in the main part of the page. Since no package has been deployed yet, there is nothing to see at first, but we can upload a package using the command at the bottom. We can also manage the configuration, instances and certificates, and we can scale up and down (change the VM size) and in and out (increase and decrease the instance count).

    Assume I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package. I can click the UPLOAD button on this page to deploy it. In the window that pops up, I specify my deployment name, package file and configuration file; I can also check the box below so that it will NOT warn me if the deployment has only one instance. Once I click OK, the package is uploaded and provisioned by the platform, and after a while the information bar reports that the service is ready. The dashboard page shows basic information about the service and deployment: the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring settings, connection strings and OS family. On the scale page we can increase and decrease the instance count, and on the instances page we can view the status of all instances. If the service uses SQL databases or storage services, they appear under the linked resources page, and the certificates of the service can be managed on the certificates page.

    How About My Storage Services?

    Storage services are managed under the STORAGE link in the navigation bar, and a new storage service can be created with the NEW button. After specifying the storage name and region, it is provisioned by the platform. To copy or manage the storage keys, just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page, where you can enable monitoring for blob, table and queue, and also enable logging for any requests that come to the storage account. But as the tooltip on the page warns, enabling monitoring and logging increases the usage of the storage account, which means a bigger bill, so make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)?

    The last thing I want to introduce quickly is SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Database navigation item. In the pop-up window, specify the database name, edition, size, collation and server; you can select an existing SQL Database Server if you have one, or create a new one. If you choose to create a new server, there is one more step: specifying the server login, password and region. Once it's ready, you can manage your databases, as well as the servers, in the portal. For a particular server, you can update the firewall settings on its Configure page.

    So, What Else?

    There are some other areas of the preview portal I didn't cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in separate future posts. The virtual network is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development, and some features are not available yet. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop on your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to go back to the old portal, so for the next several months we may need to use both sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that exist in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features of the old portal have been, are being, or will be migrated into the new portal, but some of them live in a different category or page, which we need to figure out.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Instructor Insight: Using the Container Database in Oracle Database 12c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB. Bundling Databases Inside One Container In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software. Minimizing Storage Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We also won’t need as many background processes either, thus reducing the overhead cost of the CPU resource. Improve Security Levels within Each Pluggable Database  We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.  The bottom line: it's a good idea to at least consider using a CDB. -Christopher Andrews, Senior Principal Instructor, Oracle University
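    As a concrete illustration (a sketch only; names, passwords and file paths are invented and will vary by installation), creating and opening a pluggable database inside an existing CDB looks like this in 12c:

        -- clone the seed into a new PDB, then open it
        CREATE PLUGGABLE DATABASE sales_pdb
          ADMIN USER pdb_admin IDENTIFIED BY Welcome1
          FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                               '/u01/oradata/cdb1/sales_pdb/');
        ALTER PLUGGABLE DATABASE sales_pdb OPEN;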

  • Using XML as data storage

    - by Kian Mayne
    I was thinking about the XML format and the following quote:

    "XML is not a database. It was never meant to be a database. It is never going to be a database. Relational databases are proven technology with more than 20 years of implementation experience. They are solid, stable, useful products. They are not going away. XML is a very useful technology for moving data between different databases or between databases and other programs. However, it is not itself a database. Don't use it like one."
    - Effective XML: 50 Specific Ways to Improve Your XML by Elliotte Rusty Harold (page 230, Part 4, Item 41, 2nd paragraph)

    This really seems to stress that XML should not be used for data storage and should only be used for program-to-program interoperability. Personally, I disagree: .NET's app.config file, which stores a program's settings, is an example of data storage in an XML file. For databases, though, rather than configurations and the like, XML should not be used. To develop my point, I will use two examples: (A) data about customers, with fields that are all on one level, i.e. a number of fields all relating to one customer, with no children; and (B) data about the configuration of an application, where nested fields and properties make a lot of sense. So my question is: is this statement still valid, and is it now acceptable to store data using XML? EDIT: I've sent an email to the author of that quote to ask for his input/extra context.
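    To make the two cases concrete (element names invented for illustration), case A is a flat record set that a relational table would express at least as well, while case B is a tree whose nesting carries real meaning:

        <!-- (A) flat records: every field one level deep -->
        <customers>
          <customer id="1" name="Acme Ltd" city="London"/>
          <customer id="2" name="Initech" city="Austin"/>
        </customers>

        <!-- (B) nested configuration: hierarchy is the point -->
        <logging level="Warn">
          <target type="file" path="app.log" rollDaily="true"/>
        </logging>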

  • What are some arguments AGAINST using EntityFramework?

    - by Rachel
    The application I am currently building has been using stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching to it since I am not that far into the project. My problem is that the people arguing for EF only tell me the good side of things, not the bad side :)

    My main concerns are:

    - We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure EF would save that much coding time.
    - We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    - We have a complex database layer that crosses multiple databases, and I am not sure EF can handle this. We have one Common database with things like Users, StatusCodes, Types, etc., and multiple instances of our main database for different instances of the application. SELECT queries can and will query across all instances of the databases; however, users can only modify objects that are in the database they are currently working on, and they can switch databases without reloading the application.
    - Object models are very complex, and there are often quite a few joins involved.

    Arguments for EF are:

    - Concurrency: I wouldn't have to code in checks to see if the record was updated before each save.
    - Code generation: EF can generate partial class models and POCOs for me. However, I am not positive this would really save much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    - Speed of development, since we wouldn't need to create the CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service that handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for the purpose of validation and data binding.

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel/WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It will be a managed server, so the host will have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM: we don't use many of their features, just creating accounts and databases, and clients do not have FTP access. I'm no sysadmin and come from a Windows/GUI background, but I have some knowledge of setting up development servers.

    WHM: creating accounts. I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that, as long as the DNS points to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, since only we will have access, so server-wide access to the htdocs directories would let me manage all the sites. The company supplying the dedicated host also provides its own DNS management tool, so I won't need the cPanel one.

    MySQL: creating users and databases. We use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for database management and phpMyAdmin for user management.

    Do I need cPanel, or can I get by with editing a few text files to create the accounts and then using the MySQL tools for databases? Or am I missing something major about how the sites are configured?
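    For scale, here is roughly what WHM automates per account (a sketch; all names and paths are placeholders). The Apache side is one name-based virtual host:

        # /etc/apache2/sites-available/example.com.conf
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example.com/htdocs
            CustomLog /var/log/apache2/example.com.access.log combined
        </VirtualHost>

    and the MySQL side is a database plus a scoped user, run from the mysql client as root:

        CREATE DATABASE clientdb;
        CREATE USER 'client'@'localhost' IDENTIFIED BY 'change-me';
        GRANT ALL PRIVILEGES ON clientdb.* TO 'client'@'localhost';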

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether or not to deploy your databases in virtual machines.

    A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing.

    The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances, or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS).

    PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their OSes to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.) instead of one main table that could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid the potential performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but for now I've decided to go the 'multiple tables' route.)

    Each user's books are placed into one specific table: the book table is chosen when the user is created, and all of their books go into that table. I'm going to split the adds across the tables; the goal is to keep each table roughly even, but that's a different issue. One thing I particularly don't want is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

        class Books3 < ActiveRecord::Base
          belongs_to :user
        end

    I'm assuming that the main performance hit would come in terms of memory and possibly some method-call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables is applicable, since all of a user's books are stored in the same table. So only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations, so maybe it's not so bad, but right now it feels really wasteful, and there may be a better, more efficient way of doing this. So, a few questions:

    1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any best practices for this?
    2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?
    3) Does anyone have advice on how to hide the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author').

    Thank you!
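    On question 3, one possible sketch (untested; book_table is a hypothetical integer column on users recording which table the user was assigned at creation): resolve the class by name at runtime instead of declaring five associations.

        class User < ActiveRecord::Base
          # returns the Books1..Books5 class for this user; no has_many needed
          def book_class
            Object.const_get("Books#{book_table}")
          end
        end

        # usage: dispatches to the right table, scoped to this user
        user.book_class.find_by_author_and_user_id('Author', user.id)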

  • [R] Merge multiple data frames - Error in match.names(clabs, names(xi)) : names do not match previous names

    - by Jasmine
    Hi all - I'm getting some really bizarre behavior while trying to merge multiple data frames. Help! I need to merge a bunch of data frames by the columns 'RID' and 'VISCODE'. Here is an example of what they look like:

        d1 = data.frame(ID = sample(9, 1:100), RID = c(2, 5, 7, 9, 12),
                        VISCODE = rep('bl', 5), value1 = rep(16, 5))
        d2 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 7, 7, 7),
                        VISCODE = rep(c('bl', 'm06', 'm12'), 3), value2 = rep(100, 9))
        d3 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 9, 9, 9),
                        VISCODE = rep(c('bl', 'm06', 'm12'), 3), value3 = rep("a", 9),
                        values3.5 = rep("c", 9))
        d4 = data.frame(ID = sample(8, 1:100), RID = c(2, 2, 5, 5, 5, 7, 7, 7, 9),
                        VISCODE = c(c('bl', 'm12'), rep(c('bl', 'm06', 'm12'), 2), 'bl'),
                        value4 = rep("b", 9))
        dataList = list(d1, d2, d3, d4)

    I looked at the answers to the question titled "Merge several data.frames into one data.frame with a loop" and used both the Reduce method suggested there and a loop I wrote:

        try1 = mymerge(dataList)
        try2 <- Reduce(function(x, y) merge(x, y, all = TRUE, by = c("RID", "VISCODE")),
                       dataList, accumulate = FALSE)

    where dataList is a list of data frames and mymerge is:

        mymerge = function(dataList){
          L = length(dataList)
          mdat = dataList[[1]]
          for(i in 2:L){
            mdat = merge(mdat, dataList[[i]],
                         by.x = c("RID", "VISCODE"), by.y = c("RID", "VISCODE"),
                         all = TRUE)
          }
          mdat
        }

    For my test data and for subsets of my real data, both of these work fine and produce exactly the same results. However, when I use larger subsets of my data, they both break down and give me the following error:

        Error in match.names(clabs, names(xi)) : names do not match previous names

    The really weird thing is that this works:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:47, ])

    while this fails:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, ])

    As far as I can tell, there is nothing special about row 48 of faq. Likewise, this works:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:47, ])

    while this fails:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:48, ])

    Row 48 in faq and row 48 in pdx have the same values for RID and VISCODE, the same value for EXAMDATE (something I'm not matching on) and different values for ID (another thing I'm not matching on). Besides the matching RID and VISCODE, I don't see anything special about them; they don't share any other variable names, and the same scenario occurs elsewhere in the data without problems. To add icing on the complication cake, this doesn't even work:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, 2:3])

    where columns 2 and 3 are "RID" and "VISCODE". 48 isn't even the magic number, because this works:

        dataList = list(demog[1:500,], neurobat[1:500,], apoe[1:500,], mmse[1:457,])

    while using mmse[1:458, ] fails. I can't seem to come up with test data that causes the problem. Has anyone had this problem before? Any better ideas on how to merge? Thanks for your help! Jasmine
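    One frequent culprit with chained merge(..., all = TRUE) calls is column names colliding across data frames (here ID, and in the real data EXAMDATE), since merge falls back on rbind-style name matching for the unmatched rows. A hedged sketch, offered as a guess rather than a confirmed diagnosis: suffix every non-key column with its data frame's position so no two frames share a name, then merge.

        keys <- c("RID", "VISCODE")
        dataList2 <- lapply(seq_along(dataList), function(i) {
            df <- dataList[[i]]
            nonkey <- setdiff(names(df), keys)
            names(df)[match(nonkey, names(df))] <- paste(nonkey, i, sep = ".")
            df
        })
        merged <- Reduce(function(x, y) merge(x, y, by = keys, all = TRUE), dataList2)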

  • Can I query a DOM Document with XPath expressions from multiple threads safely?

    - by Dan
    I plan to use a dom4j DOM Document as a static cache in an application where multiple threads can query the document. Taking into account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually proves the operation is safe.

        package test.concurrent_dom;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.DocumentHelper;
        import org.dom4j.Element;
        import org.dom4j.Node;

        /**
         * Hello world!
         */
        public class App extends Thread {
            private static final String xml = "<Session>"
                    + "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText1</child1>"
                    + "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText2</child2>"
                    + "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText3</child3>"
                    + "</Session>";
            private static Document document;
            private static Element root;

            public static void main(String[] args) throws DocumentException {
                document = DocumentHelper.parseText(xml);
                root = document.getRootElement();

                Thread t1 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child1");
                            if (!n1.getText().equals("ChildText1")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t2 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child2");
                            if (!n1.getText().equals("ChildText2")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t3 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child3");
                            if (!n1.getText().equals("ChildText3")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };

                t1.start();
                t2.start();
                t3.start();
                System.out.println("Hello World!");
            }
        }

  • Does .NET have a built-in IEnumerable for multiple collections?

    - by Bryce Wagner
    I need an easy way to iterate over multiple collections without actually merging them, and I couldn't find anything built into .NET that looks like it does that. It feels like this should be a somewhat common situation. I don't want to reinvent the wheel. Is there anything built in that does something like this?

        public class MultiCollectionEnumerable<T> : IEnumerable<T>
        {
            private MultiCollectionEnumerator<T> enumerator;

            public MultiCollectionEnumerable(params IEnumerable<T>[] collections)
            {
                enumerator = new MultiCollectionEnumerator<T>(collections);
            }

            public IEnumerator<T> GetEnumerator()
            {
                enumerator.Reset();
                return enumerator;
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                enumerator.Reset();
                return enumerator;
            }

            private class MultiCollectionEnumerator<T> : IEnumerator<T>
            {
                private IEnumerable<T>[] collections;
                private int currentIndex;
                private IEnumerator<T> currentEnumerator;

                public MultiCollectionEnumerator(IEnumerable<T>[] collections)
                {
                    this.collections = collections;
                    this.currentIndex = -1;
                }

                public T Current
                {
                    get
                    {
                        if (currentEnumerator != null)
                            return currentEnumerator.Current;
                        else
                            return default(T);
                    }
                }

                public void Dispose()
                {
                    if (currentEnumerator != null)
                        currentEnumerator.Dispose();
                }

                object IEnumerator.Current
                {
                    get { return Current; }
                }

                public bool MoveNext()
                {
                    if (currentIndex >= collections.Length)
                        return false;

                    if (currentIndex < 0)
                    {
                        currentIndex = 0;
                        if (collections.Length > 0)
                            currentEnumerator = collections[0].GetEnumerator();
                        else
                            return false;
                    }

                    while (!currentEnumerator.MoveNext())
                    {
                        currentEnumerator.Dispose();
                        currentEnumerator = null;
                        currentIndex++;
                        if (currentIndex >= collections.Length)
                            return false;
                        currentEnumerator = collections[currentIndex].GetEnumerator();
                    }
                    return true;
                }

                public void Reset()
                {
                    if (currentEnumerator != null)
                    {
                        currentEnumerator.Dispose();
                        currentEnumerator = null;
                    }
                    this.currentIndex = -1;
                }
            }
        }
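    There is something built in: System.Linq's Enumerable.Concat chains sequences lazily without copying, and SelectMany flattens any number of collections. A sketch:

        using System.Collections.Generic;
        using System.Linq;

        // two or three collections: chain them
        IEnumerable<int> all = first.Concat(second).Concat(third);

        // any number of collections, in order, still lazy
        static IEnumerable<T> ConcatAll<T>(params IEnumerable<T>[] collections)
        {
            return collections.SelectMany(c => c);
        }

    This also sidesteps a bug in the hand-rolled version above: its GetEnumerator hands out one shared, reset enumerator, so nested or simultaneous foreach loops over the same MultiCollectionEnumerable would interfere with each other, whereas Concat produces a fresh enumerator on every call.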

  • How to use multiple flatpages models in a django app?

    - by the_drow
    I have multiple models that can be converted to flatpages but that need to carry some extra information (for example, I have an About Us page but also a blog). However, as I understand it there can be only one flatpages model, since the middleware only returns the FlatPage instance and does not resolve the child models. What do I have to do?

    EDIT: It seems I need to change the views. Here's the current code:

        from django.contrib.flatpages.models import FlatPage
        from django.template import loader, RequestContext
        from django.shortcuts import get_object_or_404
        from django.http import HttpResponse, HttpResponseRedirect
        from django.conf import settings
        from django.core.xheaders import populate_xheaders
        from django.utils.safestring import mark_safe
        from django.views.decorators.csrf import csrf_protect

        DEFAULT_TEMPLATE = 'flatpages/default.html'

        # This view is called from FlatpageFallbackMiddleware.process_response
        # when a 404 is raised, which often means CsrfViewMiddleware.process_view
        # has not been called even if CsrfViewMiddleware is installed. So we need
        # to use @csrf_protect, in case the template needs {% csrf_token %}.
        # However, we can't just wrap this view; if no matching flatpage exists,
        # or a redirect is required for authentication, the 404 needs to be returned
        # without any CSRF checks. Therefore, we only
        # CSRF protect the internal implementation.

        def flatpage(request, url):
            """
            Public interface to the flat page view.

            Models: `flatpages.flatpages`
            Templates: Uses the template defined by the ``template_name`` field,
                or `flatpages/default.html` if template_name is not defined.
            Context:
                flatpage
                    `flatpages.flatpages` object
            """
            if not url.endswith('/') and settings.APPEND_SLASH:
                return HttpResponseRedirect("%s/" % request.path)
            if not url.startswith('/'):
                url = "/" + url
            # Here, instead of just getting the flat page, it needs to find
            # whether the page has a child model.
            f = get_object_or_404(FlatPage, url__exact=url,
                                  sites__id__exact=settings.SITE_ID)
            return render_flatpage(request, f)

        @csrf_protect
        def render_flatpage(request, f):
            """
            Internal interface to the flat page view.
            """
            # If registration is required for accessing this page, and the user isn't
            # logged in, redirect to the login page.
            if f.registration_required and not request.user.is_authenticated():
                from django.contrib.auth.views import redirect_to_login
                return redirect_to_login(request.path)
            if f.template_name:
                t = loader.select_template((f.template_name, DEFAULT_TEMPLATE))
            else:
                t = loader.get_template(DEFAULT_TEMPLATE)

            # To avoid having to always use the "|safe" filter in flatpage templates,
            # mark the title and content as already safe (since they are raw HTML
            # content in the first place).
            f.title = mark_safe(f.title)
            f.content = mark_safe(f.content)

            # Here I need to be able to configure what I am passing in the context
            c = RequestContext(request, {
                'flatpage': f,
            })
            response = HttpResponse(t.render(c))
            populate_xheaders(request, response, FlatPage, f.id)
            return response
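    One hedged possibility (a sketch, untested; the child models are hypothetical): keep the extra information on models that subclass FlatPage via multi-table inheritance, and have the copied view probe each child before rendering. Django exposes a child, when one exists, through an attribute named after the lowercased child class, and raises the child's DoesNotExist otherwise:

        # models.py -- hypothetical child models
        class AboutPage(FlatPage):
            tagline = models.CharField(max_length=100)

        class BlogPage(FlatPage):
            author = models.CharField(max_length=50)

        # in flatpage(), after f = get_object_or_404(...):
        for child in ('aboutpage', 'blogpage'):
            try:
                f = getattr(f, child)   # narrow to the child instance
                break
            except ObjectDoesNotExist:  # from django.core.exceptions
                pass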

  • Maven best practice for generating multiple jars with different/filtered classes?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDAs with the J9 Foundation profile). Over time the library, which started as a single project, spread over multiple packages; as a result I end up with a lot of functionality that is not needed in every project. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars.

    Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (e.g. my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the jar task's include/exclude features. This is mostly needed due to the jar size constraints of the mobile projects.

    Migrating now to Maven, I wonder if there are any best practices to arrive at something similar. I am considering the following scenarios:

    [1] - In addition to the main jar artifact (my-lib-1.0.jar), also generate, inside the my-lib project, the separate/specialized artifacts using classifiers (e.g. my-lib-1.0-pda.jar) with the Maven Jar Plugin or Maven Assembly Plugin filtering/includes (see the sketch below). I'm not very comfortable with this approach since it pollutes the library with its consumers' demands (filters).

    [2] - Create additional Maven projects for all the specialized clients/projects, which "wrap" my-lib and generate the filtered jar artifacts (e.g. my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will contain the filtering (to generate the filtered artifact) and will depend only on the my-lib project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may be problematic since, even though it leaves the my-lib project intact (no additional classifiers & artifacts), it basically doubles the number of projects: for every client project I'll have one more just to collect the needed classes from the my-util library (a "my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency).

    [3] - In every client project that uses the library (e.g. my-pda-app), add specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (e.g. maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).

    What is the best practice for solving this kind of problem the "Maven way"?
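    For completeness, option [1] with the Maven Jar Plugin looks roughly like this (a sketch; the includes pattern is invented). Each extra execution attaches another classified artifact alongside the main jar:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <executions>
            <execution>
              <id>pda-jar</id>
              <phase>package</phase>
              <goals><goal>jar</goal></goals>
              <configuration>
                <classifier>pda</classifier>
                <includes>
                  <include>com/myorg/util/core/**</include>
                </includes>
              </configuration>
            </execution>
          </executions>
        </plugin>

    A client then depends on the trimmed jar by adding <classifier>pda</classifier> to its my-lib dependency.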

  • Maven-ear-plugin - excluding multiple modules, i.e. jars, wars, etc.

    - by James Murphy
    I've been using the Maven EAR Plugin to create the ear files for a new project. I noticed in the plugin documentation that you can specify exclude statements for modules. For example, my plugin configuration is as follows...

        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-ear-plugin</artifactId>
            <version>2.4.1</version>
            <configuration>
              <jboss>
                <version>5</version>
              </jboss>
              <modules>
                <!-- Include the templatecontroller.jar inside the ear -->
                <jarModule>
                  <groupId>com.kewill.kdm</groupId>
                  <artifactId>templatecontroller</artifactId>
                  <bundleFileName>templatecontroller.jar</bundleFileName>
                  <includeInApplicationXml>true</includeInApplicationXml>
                </jarModule>
                <!-- Exclude the following classes from the ear -->
                <jarModule>
                  <groupId>javax.activation</groupId>
                  <artifactId>activation</artifactId>
                  <excluded>true</excluded>
                </jarModule>
                <jarModule>
                  <groupId>antlr</groupId>
                  <artifactId>antlr</artifactId>
                  <excluded>true</excluded>
                </jarModule>
                ... declare multiple excludes ...
              </modules>
              <security>
                <security-role id="SecurityRole_1234">
                  <role-name>admin</role-name>
                </security-role>
              </security>
            </configuration>
          </plugin>
        </plugins>

    This approach is absolutely fine for small projects where you have, say, 4-5 modules to exclude. However, in my project I have 30+, and since we've only just started, that number is likely to grow as the project expands. Besides explicitly declaring an exclude statement per module, is it possible to use wildcards or an exclude-all-dependencies flag, so that only the modules I declare are included and everything else is excluded? Is anyone aware of a more elegant solution?
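    One way to avoid enumerating every exclusion (a sketch, assuming these artifacts only reach the EAR as dependencies): give them provided scope in the EAR module's POM. The maven-ear-plugin packages only compile- and runtime-scoped dependencies, so anything marked provided stays out without an <excluded> jarModule entry:

        <dependency>
          <groupId>javax.activation</groupId>
          <artifactId>activation</artifactId>
          <version>1.1</version>
          <scope>provided</scope>
        </dependency>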
