Search Results

Search found 40998 results on 1640 pages for 'project setup'.

Page 476/1640 | < Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >

  • Need recommendation for transferring ASP.NET MVC skills to PHP

    - by Tuck
I am looking to translate my skills in .NET to PHP - specifically in regards to ASP.NET MVC. At work I am currently using .NET MVC 2.0 on a variety of projects and thoroughly enjoy the platform. Specifically I enjoy the very minimal configuration required to get a project up and running (just create the project, define routes, and start coding), as well as the ability for controller actions to return different items (e.g. ActionResult, JsonResult). Another piece I really like is the way the view/model interaction can be handled. For example I like being able to call return View(model) and having a view page (.aspx) load with the full model object available to the view, regardless of the model type. I'm looking for the PHP implementation of MVC that is most similar to what I am already familiar with. I don't need anything apart from the MVC functionality. I've looked at Zend, Symfony, CodeIgniter, etc. and, while they look like they'll be fun to play with in the future, they provide much more functionality than I need. I'd prefer to write my own DAL, form helpers, delegate handlers, authentication/ACL pieces, etc. In short, I just need something to handle the routing and view interactions, and I will worry about the model implementation myself. Can someone please point me to some lightweight code that accomplishes, or comes close to accomplishing, my objectives above? Or, can someone identify just the portions of a larger framework that do the same (again, I'm not currently interested in implementing something on a big framework, just the MVC portion, and I want to implement the model portion myself as much as possible)? Thanks in advance.
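    For what it's worth, the core of what's asked here - route a URL to a controller method, then hand its return value to a view script - fits in a single front controller. A minimal hedged sketch (plain PHP 5.3+, every class and path name hypothetical, not any particular framework):

        <?php
        // index.php -- minimal front controller.
        // Maps /controller/action to <Controller>Controller::action() and
        // exposes whatever the action returns to a plain PHP view as $model.
        $path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
        if ($path === '') {
            $path = 'home/index';                        // default route
        }
        list($controller, $action) = array_pad(explode('/', $path), 2, 'index');

        $class = ucfirst($controller) . 'Controller';
        require __DIR__ . "/controllers/{$class}.php";   // e.g. controllers/HomeController.php

        $instance = new $class();
        $model    = $instance->$action();                // the action returns the model

        // require() shares scope, so the view script can read $model directly,
        // regardless of the model type -- similar to return View(model).
        require __DIR__ . "/views/{$controller}/{$action}.php";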

    Read the article

  • FoxTales: Behind the Scenes at Fox Software by Kerry Nietz

Flashbacks from the past! It's truly amazing to discover that software development from freshman to senior level, as well as project management, hasn't changed that much. Kerry Nietz describes his memoir from his final year at college to his first job at Fox Software to 'an early retirement' at Microsoft. This title also brought his other fictional novels to my attention. Once again, here is the review I published on Amazon: Built to last! I have been around in software development for more than a decade now, but honestly I have to admit it is only now that I took the opportunity to read about the history of what used to be my primary programming language. In fact, I started with Visual FoxPro 6 back in 1999 and only went back as far as FoxPro for Windows 2.6 during migration projects - long after the stories described in this title. It is really interesting to see how they actually managed to create a great product with such a small team of developers. "Create the best Report Writer in the world, out of only sawdust, bubblegum, and dreams." - That's the best sentence I'm going to quote from this title in the future. An inspiration to achieve the impossible, only by taking small steps. Just begin the journey - one step after the next one. If you fall, stand up and continue to walk. Kerry takes the reader on an amazing trip through almost 4 years working at a small software company in Perrysburg, Ohio, which went from yet another look-alike of the mighty Ashton-Tate dBase to the leading force in database development, long before Microsoft Access (project name: Cirrus) was even finished. It survived Borland Paradox, and even nowadays Visual FoxPro is still in daily use in thousands of companies world-wide. Actually, I'm glad that I had the chance to foster my programming knowledge with Visual FoxPro. After his excellent work in software development, Kerry went for a second career as a writer. I'm looking forward to reading his other titles soon.

    Read the article

  • What design patterns are the worst or most narrowly defined?

    - by Akku
For every programming project, managers with past programming experience try to shine when they recommend design patterns for your project. I like design patterns when they make sense, or if you need a scalable solution. I've used Proxies, Observers and Command patterns in a positive way, for example, and do so every day. But I'm really hesitant to use, say, a Factory pattern if there's only one way to create an object: a factory might make things easier in the future, but it complicates the code and is pure overhead. So, my question is in respect to my future career and my answer to manager types throwing random pattern names around: Which design patterns did you use that set you back overall? Which are the worst design patterns, the ones you shouldn't touch outside the single narrow situation where they make sense (read: which design patterns are very narrowly defined)? (It's as if I were looking for the negative reviews of an overall good product on Amazon to see what bugged people most about using design patterns.) And I'm not talking about anti-patterns here, but about patterns that are usually thought of as "good" patterns. Edit: As some answered, the problem is most often that patterns are not "bad" but "used wrong". If you know patterns that are often misused or even difficult to use, they would also fit as an answer.
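    To make the "pure overhead" point concrete, here is a hedged Java sketch (all names invented): one concrete type with one way to build it, where the factory is indirection with no payoff until a second implementation actually exists:

        class Report {
            private final String title;
            Report(String title) { this.title = title; }
        }

        // Indirection with no payoff while Report has exactly one implementation:
        class ReportFactory {
            static Report create(String title) { return new Report(title); }
        }

        public class Demo {
            public static void main(String[] args) {
                Report direct     = new Report("Q1 sales");           // clear and sufficient
                Report viaFactory = ReportFactory.create("Q1 sales"); // same result, more code
            }
        }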

    Read the article

  • hosting environment for delivering FLVs

    - by Gotys
What would be the ideal hardware setup for pushing lots of bandwidth on a tube site? We have ever-expanding cloud storage where users upload the movies, then we have these web-delivery machines which cache the FLV files on their local hard drives and deliver them to users. Each cache machine can deliver 1200 Mbit/s if it has 8 SAS hard drives. Such a cache machine costs us $550/month for 8x160GB - so each machine can cache only 160GB at any given time. If we want to cache more than 160GB, we need to add another machine... another $550/month... etc. This is very uneconomical, so I am wondering if we have any experts here who can figure out a better setup. I've been looking into GlusterFS, but I am not sure if it can push a lot of bandwidth. Any ideas highly appreciated. Thank you!

    Read the article

  • SQL 2008 Mirroring, how to failover from the mirror database?

    - by Luis
I have configured a database mirroring setup in SQL 2008 using the high-safety, synchronous mode, without automatic failover. I don't have a Witness instance. Regarding high availability, I understand mirroring is a better strategy than log shipping (faster and smoother failover) and cheaper than clustering (because of license and hardware costs). According to the MS docs, to do the failover you need to access the Principal database and, in the "Mirror" options, click the "Failover" button. But I want to do this from the Mirror database, because what would be the benefit when all this setup is being done precisely in case the Principal server goes down? Evidently I am missing something. If mirroring is not a solution for server downtime (as clustering would be, if I understand correctly), then which practical (i.e. real-world) cases would benefit from mirroring for high-availability purposes? Thank you very much for your response! I really need some enlightenment.
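    For reference, a hedged T-SQL sketch (database name hypothetical): the GUI "Failover" button issues the first command, which must run on the principal; from the mirror side, once the principal is unreachable, the option in a no-witness high-safety setup is forced service, which accepts possible data loss:

        -- On the PRINCIPAL: manual failover; roles swap with no data loss.
        ALTER DATABASE MyDatabase SET PARTNER FAILOVER;

        -- On the MIRROR, when the principal is lost: force the mirror into
        -- service (possible loss of unsent transactions); resynchronize when
        -- the old principal comes back online.
        ALTER DATABASE MyDatabase SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;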

    Read the article

  • Advice on VMware hardware requirements and host OS

    - by edwin.nathaniel
Hi All, I'm a newbie developer wanting to learn a bit about virtualization (from the IT point of view, not theoretical/academic). What I'd like to do: prepare a machine; install VMware or VirtualBox; prepare 3 guest OSes (one Win2k8 Server, two Ubuntu Server). Win2k8 will run SQL Server 2k8 and IIS (for ASP.NET MVC deployment); 1 Ubuntu Server is for Drupal, SugarCRM, MediaWiki, typical LAMP stuff; 1 Ubuntu Server is for Java (Tomcat/Jetty + MySQL/PostgreSQL). What I'd like to know: What would be the ideal host OS, such that the host OS does not consume too many resources itself but leaves as much as possible for the VM instances (e.g. does Win2k8 perform better than Linux?)? What would be the ideal machine for this (preferably an AMD-based chip)? I'm not expecting the best performance out of this setup, just a decent one to host one Drupal instance, one ASP.NET MVC app (future, not now), and one Tomcat/Jetty instance. NB: If you have better suggestions on the setup, feel free to let me know (e.g. maybe Drupal and Tomcat can share one instance and the databases move to another instance, instead of mapping 1 instance to 1 web server and 1 DB server). Thank you.

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective, but from my research and my intuition this seems like it could perform terribly at design time, at compile time, and possibly at run time. What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
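    As one hedged illustration of the split-by-intent option (EF code-first, all names hypothetical): two narrow DbContexts, each mapping only the tables its module needs, with a shared table such as User mapped in both contexts against the same physical table:

        // EF 4.1+ code-first sketch -- types and names are invented.
        using System.Data.Entity;

        public class User    { public int Id { get; set; } public string Name   { get; set; } }
        public class Order   { public int Id { get; set; } public int    UserId { get; set; } }
        public class Invoice { public int Id { get; set; } public decimal Total { get; set; } }

        public class OrderingContext : DbContext
        {
            public DbSet<User>  Users  { get; set; }   // shared table, mapped here...
            public DbSet<Order> Orders { get; set; }
        }

        public class BillingContext : DbContext
        {
            public DbSet<User>    Users    { get; set; } // ...and here: same [dbo].[Users]
            public DbSet<Invoice> Invoices { get; set; }
        }

    Whether mapping User twice "violates DRY" is arguable; the duplication is in the mapping, not the schema, and it keeps each context small enough to stay fast at design time.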

    Read the article

  • ssh agent forwarding with many identities

    - by Eddified
I have set up ssh agent forwarding, and I know it works. I also have many keys set up in the agent. The problem is there are so many keys that I get: Received disconnect from XXX.XXX.XXX.XXX: 2: Too many authentication failures for bob The way around this is to use IdentitiesOnly=yes so that ssh will only send the identity you want it to for the specified host. I've also got this working, and I know it works, without agent forwarding. Now, I'm trying to combine the two features. That is, I want to use agent forwarding, but also be able to specify which identity to use when connecting. The problem is, I can't figure out how to do this. So, I want to connect from box A through box B to box C. Box A has all of the identity files and the ssh agent running. I want to edit box A's or box B's ssh config file(s) to use a specific identity that exists in box A's agent (which is being forwarded).
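    One hedged sketch (host and file names hypothetical, and this assumes a reasonably recent OpenSSH): on box B, IdentityFile can point at a copy of just the *public* key file; IdentitiesOnly then restricts ssh to offering that one key, while the private half stays in box A's forwarded agent:

        # ~/.ssh/config on box B
        Host boxC
            HostName boxc.example.com
            User bob
            IdentitiesOnly yes
            IdentityFile ~/.ssh/boxc_key.pub   # public half only; agent holds the private key

        # ~/.ssh/config on box A
        Host boxB
            ForwardAgent yes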

    Read the article

  • Notes for a NetBeans IDE 7.4 HTML5 Screencast

    - by Geertjan
I'm making a screencast that intends to thoroughly introduce NetBeans IDE 7.4 as a tool for HTML, JavaScript, and CSS developers. Here's the current outline; additions and other suggestions are welcome.

    Getting Started
    - Downloading NetBeans IDE for HTML5 and PHP
    - Examining the NetBeans installation directory, especially netbeans.conf
    - Examining the NetBeans user directory
    - Command line options for starting NetBeans IDE

    Exploring NetBeans IDE
    - Menus and toolbars
    - Versioning tools
    - Options window (go through the whole Options window): change look and feels, adding themes, syntax coloring, code templates
    - Plugin Manager and Plugin Portal: Dark Look and Feel Themes, Toggle line wrap, Emmet, HTML Tidy
    - NetBeans Cheat Sheets

    Creating HTML5 projects
    - From scratch
    - From online template, e.g., Twitter Bootstrap
    - From ZIP file
    - From folder on disk
    - From sample

    Editing
    - Useful shortcuts: Alt-Enter (see the current hints); Alt-Shift-DOT/COMMA (expand selection; Ctrl instead of Alt on Mac); Ctrl-Shift-Up/Down (copy up/down); Alt-Shift-Up/Down (move up/down); Alt-Insert (generate code, Lorem Ipsum)
    - View menu | Show Non-printable Characters
    - Source menu; show keyboard shortcut card
    - Useful hints: Surround with Tag, Remove Surrounding Tag
    - Useful code completion: link tag for CSS (show completion); script tag for JavaScript (show completion); create code templates in Options window
    - Useful HTML Palette items: Unordered List, Link
    - Useful code navigation: Navigator, Navigate menu
    - Useful project settings: project-level deployment settings, CSS preprocessors (SASS/LESS), Cordova support
    - Useful window management: dragging, minimizing, undocking; Ctrl-Shift-Enter (distraction-free mode); Alt-Shift-Enter (maximization)

    Debugging
    - JavaScript debugger

    Deploying
    - Embedded browser; responsive design; Inspect in NetBeans mode
    - Chrome browser with NetBeans plugin
    - Android and iOS browsers
    - Cordova makes native packages; on-device debugging; on-device styling

    Documentation
    - PHP and HTML5 Learning Trail: https://netbeans.org/kb/trails/php.html

    Contributing
    - Social media: Twitter, Facebook, blogs
    - Plugin Portal

    Planning to complete the above screencast this week; I will continue editing this page as more useful features arise in my mind, or hopefully in the comments on this blog entry!

    Read the article

  • Prioritize /server-status ahead of other .htaccess rewrite rules in apache2

    - by Wiliam
Is it possible to prioritize /server-status ahead of other rewrite rules? For example, the following rules stop /server-status from working:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-s
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !\.
        RewriteRule ^.*$ index.php [NC,L]

    I have to add this to the .htaccess so the request doesn't enter the PHP project:

        RewriteRule ^server-status$ - [L]

    With Google's mod_pagespeed, its statistics page (/mod_pagespeed_statistics, https://developers.google.com/speed/docs/mod_pagespeed/configuration) works without adding anything to the project .htaccess. My mod_status.conf:

        <IfModule mod_status.c>
            <Location /server-status>
                SetHandler server-status
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1 # My allowed ips here
            </Location>
        </IfModule>
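    For reference, two hedged approaches: the pass-through rule placed before the catch-all (as above) is the certain fix; alternatively, one can try switching the rewrite engine off for that one location at server level, so the project .htaccess never needs touching - an assumption worth testing, since per-directory merge order decides which setting wins:

        # Option 1: first rewrite line of the project .htaccess
        RewriteRule ^server-status$ - [L]

        # Option 2: in mod_status.conf, try overriding the per-directory engine
        <IfModule mod_status.c>
            <Location /server-status>
                SetHandler server-status
                RewriteEngine Off
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Location>
        </IfModule>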

    Read the article

  • Perl throwing 403 errors!

    - by Jamie
When I first installed Perl in my WAMP setup, it worked fine. Then, after installing ASP.NET, it began throwing 403 errors. Here's my ASP.NET config:

        # Load asp.net module
        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"

        # Set asp.net extensions
        AddHandler asp.net asp asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

        # Mount application
        AspNetMount /asp "c:/users/jam/sites/asp"

        # ASP directory alias
        Alias /asp "c:/users/jam/sites/asp"

        # Directory setup
        <Directory "c:/users/jam/sites/asp">
            # Options
            Options Indexes FollowSymLinks Includes +ExecCGI
            # Permissions
            Order allow,deny
            Allow from all
            # Default pages
            DirectoryIndex index.aspx index.htm
        </Directory>

        # aspnet_client files
        AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

        # Allow ASP.net scripts to be executed in the temp folder
        <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
            Options FollowSymLinks
            Order allow,deny
            Allow from all
        </Directory>

    Also, what are the code tags for this site?

    Read the article

  • Microsoft Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how, today, I might try to make the distinction when talking to someone: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged native C++ DLL project." However I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • Guacamole on KVM

    - by Siem Hermans
I currently run a deployment where I provide virtual machines to circa 25 users through ESXi over the VMware WSX web portal. This works, no doubt about it; it is fast, stable and reliable enough for the end users. However, I stumbled across the Guacamole project (link: http://guac-dev.org/) and the KVM project (link: http://www.linux-kvm.org/). I must say I am no genius when it comes to Linux, but I am interested in replacing the ESXi and WSX combination with a Guacamole and KVM deployment. I have seen people across the internet use ESXi in combination with Guacamole (mostly prior to the release of WSX), but I have yet to see someone use it in conjunction with KVM. Given my limited knowledge of Linux in general, I would like to ask: is it possible at all to combine the two?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – prior to SSDT you would need to define a DEFAULT constraint; however, it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
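    As a hedged illustration of that Post-Deployment step (the table and column are the example's; the backfill value is hypothetical):

        -- Post-Deployment script: replace the arbitrary smart default
        -- with the value the business actually wants.
        UPDATE [dbo].[T1]
        SET    [C1] = 42          -- real backfill value goes here
        WHERE  [C1] = 0;          -- rows that received the smart default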

    Read the article

  • Rails deployment questions

    - by Meltemi
Getting ready to deploy a Rails project on Mac OS X Server (10.5 - Leopard). Got a few questions for someone with Rails experience: Where should the directory containing the project go - inside the website's root folder or outside it? Who should "own" that directory - www? root? Something/someone else? I hope to continue serving static pages via Apache and would like the Rails app to be served at mydomain/xxx/railsapp. Is there a standard name people use for 'xxx'? Not expecting too much traffic to begin with... I'd just like to keep things as simple as possible.
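    A hedged sketch, assuming Phusion Passenger on Apache (every path here is hypothetical): keep the app outside the web root, owned by an unprivileged deploy user, and symlink its public/ folder under the static site, then declare the sub-URI:

        # one-time: expose the app's public/ dir under the Apache docroot
        #   sudo ln -s /opt/rails/railsapp/public /Library/WebServer/Documents/railsapp

        <VirtualHost *:80>
            ServerName mydomain
            DocumentRoot /Library/WebServer/Documents
            RailsBaseURI /railsapp      # serve the app at mydomain/railsapp
        </VirtualHost>

    Static pages keep being served by Apache from the docroot; only requests under /railsapp reach Rails.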

    Read the article

  • Run a batch file silently, executed at remote desktop login

    - by ILMV
In our office we are using Linux thin-client machines, and they work very well except for the lack of IE, which is a pain because the corporations we deal with are too stupid to update their web apps (no flame wars please). To solve this problem we have a machine in our computer room that users remote-desktop into to access Internet Explorer; this is achieved by running a batch script which opens IE and, when it closes, logs them off. This setup works well for us. Even though I have @echo off and the cmd window isn't displaying anything, I would really like that batch file to be executed silently, so the cmd window doesn't appear at all. Is this possible? The Ubuntu terminal server client has an option to launch a file/app at login; is there a command I can use to run this batch silently? I have tried these:

        C:\my_batch.bat /NOCONSOLE
        C:\my_batch.bat /NOWINDOW
        C:\my_batch.bat /B
        C:\my_batch.bat /Q

    ...with no success; perhaps it's the way I am doing it? Cheers :-) Edit: The remote desktop platform is a Windows XP machine, nothing entirely special, but not a Windows Server setup.
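    One hedged route (a sketch; the wrapper file name is invented): cmd.exe has no hide-window switch of its own, but a one-line Windows Script Host wrapper can run the batch with window style 0, and the RDP login command can point at the wrapper instead of the batch:

        ' launch_hidden.vbs -- run the batch with no visible console.
        ' 0 = hidden window; True = wait until the batch (IE session + logoff) ends.
        CreateObject("WScript.Shell").Run """C:\my_batch.bat""", 0, True

    The startup command then becomes: wscript.exe C:\launch_hidden.vbs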

    Read the article

  • Getting into driver development for linux [closed]

    - by user1103966
Right now, I've been learning about writing device drivers for the Linux 3.2 kernel for about 2 months. So far I have been able to program simple char drivers that only read and write to a fictitious dev structure like a file, but now I'm moving on to more advanced concepts. The new material I've learned about includes I/O port manipulation, memory management, and interrupts. I feel that I have a basic understanding of overall driver operation, but there is still so much that I don't know. My question is this: given that I have the basic theory of how to write a device driver for a piece of hardware... how long would it take to develop the skills to write software that companies would actually want to employ? I plan on getting involved in an open-source project and building a portfolio. Also, what type of beginner drivers could I write for hardware that would best help me develop my skills? I was thinking that taking on a project where I design my own key logger would be easy and a good assignment to help me understand how I/O ports and interrupts are used. I may want to eventually specialize in writing software for video cards or network devices, though these devices seem beyond my understanding at the moment. Thanks for any help

    Read the article

  • Can't complete dropbox installation from behind proxy

    - by Mark Jones
Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the software centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far: Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under Network in Settings. Added http_host and http_port variables under gconf-editor system/proxy. Added 'host', 'authentication_password', 'authentication_user' and ticked 'use authentication' and 'use_http_proxy' under gconf-editor system/http_proxy. Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc. Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages). (Where ** is my password.) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it. Related issues: The Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: After some time (10 mins-ish) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
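    To the closing question, and as a hedged next step (this assumes the daemon honours http_proxy, which the error text suggests; password elided as above):

        # List whatever proxy variables the current session actually has:
        env | grep -i proxy

        # The daemon only sees variables set in the environment it starts from,
        # so exporting before launching is one test:
        export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/"
        export https_proxy="$http_proxy"
        dropbox start -i    # CLI from the nautilus-dropbox package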

    Read the article

  • Constructor vs setter validations

    - by Jimmy
I have the following class:

        public class Project {
            private int id;
            private String name;

            public Project(int id, String name, Date creationDate, int fps, List<String> frames) {
                if (name == null) {
                    throw new NullPointerException("Name can't be null");
                }
                if (id == 0) {
                    throw new IllegalArgumentException("id can't be zero");
                }
                this.name = name;
                this.id = id;
            }

            public int getId() { return id; }
            public void setId(int id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

    I have three questions: Should I use the class setters instead of setting the fields directly? One of the reasons I set them directly is that the setters are not final and could be overridden. If the right way is to set them directly, and I want to make sure the name field is never null, should I provide two checks - one in the constructor and one in the setter? I read in Effective Java that I should use NullPointerException for null parameters. Should I use IllegalArgumentException for other checks, like the id check in the example?
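    On the second question, one hedged sketch (simplified to the two validated fields): making the setters final removes the overriding worry, so the constructor can safely delegate to them and each check lives in exactly one place:

        public class Project {
            private int id;
            private String name;

            public Project(int id, String name) {
                setId(id);        // constructor reuses the validated setters
                setName(name);
            }

            public final void setId(int id) {   // final: safe to call from the ctor
                if (id == 0) {
                    throw new IllegalArgumentException("id can't be zero");
                }
                this.id = id;
            }

            public final void setName(String name) {
                if (name == null) {
                    throw new NullPointerException("Name can't be null");
                }
                this.name = name;
            }
        }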

    Read the article

  • Scrum with Team Foundation Server 2010 Done

    - by Martin Hinshelwood
Since I joined SSW as a Solution Architect, its Chief Architect, Adam Cogan, has been mentoring me and pushing me to do better. One of the things that I have been wanting to do since the first DDD Scotland was to present a session. For DDD Scotland 2010 Adam suggested that I submit a double session on “Better project management with Team Foundation Server 2010”. So, with some apprehension, I submitted two sessions as Part A and Part B. Download DDD Scotland -  Scrum with Team Foundation Server 2010 How surprised I was when, after the attendees had finished casting their votes, both sessions were in the top 20, one in the top 5. In an effort to promote diversity in sessions, the DDD committee try to make sure that each presenter has only one session, so I would have to compress SSW’s presentation into 1 hour. Around this time SSW embarked on its continuing adventures with Scrum, and Microsoft started heavily investing in Scrum for its internal use. I decided to do a slightly different session, but one that would still meet the agenda and goal of the billed session: to provide “Better project management with Team Foundation Server 2010”. And so Scrum with Team Foundation Server 2010 was born. At this stage I really have to thank Aaron Bjork, who provided me with many of the slides and animations, as I really can’t work PowerPoint. On the 27th of April I presented the session for the Aberdeen Partner Group, and then on the 8th of May I presented at DDD Scotland. Figure: Some of the presenters and organisers of DDD Scotland I mentioned quite a few of SSW’s Rules to Better Scrum Using TFS, and I have uploaded my presentation to SkyDrive.   Download DDD Scotland -  Scrum with Team Foundation Server 2010 Technorati Tags: DDD Scot,Scrum,TFS 2010,SSW

    Read the article

  • Suggestions for a CMS markup language for PHP

    - by Yanick Rochon
As a learning experience, and as a project, I am attempting to write a CMS module for ZF2. One piece of functionality I would like is the ability to add dynamic content to pages by calling PHP functions in the view scripts. However, I do not want to give users the freedom to write PHP code directly inside the page content, but rather to implement custom view helpers (or widgets) to handle logic - for example, calling partial, partialLoop, url, etc., specifying arguments and all. I liked the idea of extending Markdown, but this would get complicated when trying to add custom CSS classes to elements, etc. Then I had the idea of simply doing a preg_replace on some patterns. For example, the string:

        ### partialLoop:['partials/display.phtml',[{id:'p1',price:4.99},{id:'p2',price:12.34}]] ###

    would be replaced by:

        <?php echo $this->partialLoop('partials/display.phtml', array(array('id'=>'p1','price'=>4.99),array('id'=>'p2','price'=>12.34))) ?>

    Obviously, there would be some caching done so the page content is not rendered every time. Does this sound good? If not, what would be a good way of doing this? Or is there a project already being developed for doing this? (I'd like to avoid heavy third-party libs; something fairly or fully compatible with ZF2 would be nice.) Thanks.
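    For what the compile step might look like, a minimal hedged sketch (PHP 5.3+; the pattern and function name are invented, and it assumes the widget arguments are written as strict JSON - the single-quoted keys in the example above would need normalising first):

        <?php
        // Turn "### helper:[...] ###" markers into view-helper calls, to be
        // cached as a compiled template rather than re-parsed on every request.
        function compileWidgets($content)
        {
            return preg_replace_callback(
                '/###\s*(\w+):(\[.*?\])\s*###/s',
                function ($m) {
                    $args = json_decode($m[2], true);    // decoded argument list
                    $php  = array();
                    foreach ($args as $a) {
                        $php[] = var_export($a, true);   // render as PHP literals
                    }
                    return '<?php echo $this->' . $m[1]
                         . '(' . implode(', ', $php) . ') ?>';
                },
                $content
            );
        }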

    Read the article

  • Free IP address management software

    - by TiFFolk
We are choosing a system for managing our IP address space, so we are looking for special free software like IPPlan. What we have found so far:

        IPplan (beta IPv6 support)
        SolarWinds IP Address Tracker (IPv6 support unknown)
        IP module of The NOC Project (BTW, take a look at it; it seems to be a very promising project) (IPv6 support unknown)
        phpIP (does not support IPv6)
        IP management from RackTables (does not support IPv6)

    Do you know about any other special software like those above? But: no wiki, no DNS, no DHCP, no spreadsheet. The software should provide: a clear view of available addresses; a detailed listing of all addresses by subnet/search pattern/owner/additional info; support for adding additional info like the owner of an IP, domain name, contacts, etc.; multi-user support; an easy interface; scalability; any OS (Win, Lin, Sol, web). The software has to be specially written for address management.

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The various customer needs that serve as guides to this new trend are often very different, but almost always united by two main objectives: elasticity of infrastructure, both hardware and software; and investments that track the progressive needs of the current infrastructure - characteristics of innovation and economy. A concrete use case that I worked on recently demanded the fulfillment of two basic requirements, economy and innovation. The client needed to manage a variety of data caches, able to process complex queries and parallel computational operations while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to manage the likely load peaks during certain times of the year. For this reason, the customer required a replication site, to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid tying up investments in owned hardware/software architectures, so we were asked to look for a solution based on Cloud technologies and architectures already offered by the market. Coherence can already address the requirement of large caches shared between different nodes in the cluster, providing technology for search and parallel computing that makes simultaneous use of all hardware infrastructure resources. Moreover, thanks to its "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need for a resilient infrastructure that can also rely on nodes temporarily hosted in Cloud architectures is satisfied. There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence. Configurations can be:

        Active - Passive
        Hub and Spoke
        Active - Active
        Multi Master
        Centralized Replication

    Whereas the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If, however, the requirement should change over time, it will be particularly easy to change this into an Active-Active configuration. Although very simple, the small sample in this post, inspired by the actual project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, in two different domain contexts: one of them "hosted" at home (on-premise) and the other one hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so that a simple client can insert data into Site 1 only. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
To implement these features, you need 4 simple classes:

        CachedResponse.java - the POJO class that will be inserted into the cache; it holds useful information about the hypothetical link navigation.
        ResponseSimulatorHelper.java - a link simulator, which randomly creates objects of type CachedResponse to be added into the caches.
        CacheCommands.java - the model of our example, responsible for receiving instructions from the controller and performing basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache.
        Shell.java - our controller, which issues the commands to be executed within the caches of the two sites.

    So, in summary, we run the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on the Site Cloud, while insertions and deletions are performed on Site 1. Now we create the configurations for the two sites/clusters. For Site 1 we define a cache of type "distributed" with read/write features, using the cache store class for "push replication", a functionality offered by the Oracle Coherence "incubator" project. For the Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled so it can receive updates from Site 1.

        Coherence Cache Config XML file for "storage node" on "Site 1": site1-prod-cache-config.xml
        Coherence Cache Config XML file for "storage node" on "Site Cloud": site2-prod-cache-config.xml

    For the two "Shell" clients, which will connect to the two clusters respectively, we have provided two simple access configurations.

        Coherence Cache Config XML file for Shell on "Site 1": site1-shell-prod-cache-config.xml
        Coherence Cache Config XML file for Shell on "Site Cloud": site2-shell-prod-cache-config.xml

    Now, we just have to put everything together and run our tests.
To start at least one "storage" node (which holds the data) for the "Site Cloud", we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for "Site 1", we can again run the standard class provided by Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    Then we start the first client "Shell" for the "Site Cloud", launching the java class it.javac.Shell with these parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second client "Shell" for "Site 1", launching a new instance of the class it.javac.Shell with the following parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    And now let's run some tests to validate and better understand our configuration.

    TEST 1
    The purpose of this test is to load objects into the "Site 1" cache and see how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access "Site 1", let's write and run the command: load test/100. Within the "Shell" launched with parameters to access the "Site Cloud", let's write and run the command: size passive-cache.
    Expected result: if all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently, the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster, where they will have been posted into a corresponding cache, which we named "passive-cache".

    TEST 2
    The purpose of this test is to listen for the delete and add events happening on "Site 1" that are replicated to the cache on the "Site Cloud".
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • What are the packages/libraries I should install before compiling Python from source?

    - by Lennart Regebro
Once in a while I need to install a new Ubuntu (I use it both for desktops and servers) and I always forget a couple of libraries I should have installed before compiling, meaning I have to recompile, and it's getting annoying. So now I want to make a complete list of all the library packages to install before compiling Python (and preferably how optional they are). This is the list I compiled with the help below and by digging in setup.py. It is complete for Ubuntu 10.04 and 11.04 at least:

        build-essential (obviously)
        libz-dev (also pretty common and essential)
        libreadline-dev (or the Python prompt is crap)
        libncursesw5-dev
        libssl-dev
        libgdbm-dev
        libsqlite3-dev
        libbz2-dev

    More optional:

        tk-dev
        libdb-dev

    Ubuntu has no packages for v1.8.5 of the Berkeley database, nor (for obvious reasons) the Sun audio hardware, so the bsddb185 and sunaudiodev modules will still not be built on Ubuntu, but all other modules are built with the above packages installed. Python 2.5 and Python 2.6 also need to have LDFLAGS set on Ubuntu 11.04 and later, to handle the new multi-arch layout:

        export LDFLAGS="-L/usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)"

    For Python 2.6 and 2.7 you also need to explicitly enable SSL after running the ./configure script and before running make. In Modules/Setup there are lines like this:

        #SSL=/usr/local/ssl
        #_ssl _ssl.c \
        #       -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
        #       -L$(SSL)/lib -lssl -lcrypto

    Uncomment these lines and change the SSL variable to /usr:

        SSL=/usr
        _ssl _ssl.c \
                -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
                -L$(SSL)/lib -lssl -lcrypto

    Python 2.6 also needs Modules/_ssl.c modified to be used with OpenSSL 1.0, which is used in Ubuntu 11.10. At around line 300 you'll find this:

        else if (proto_version == PY_SSL_VERSION_SSL3)
            self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */
        else if (proto_version == PY_SSL_VERSION_SSL2)
            self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */
        else if (proto_version == PY_SSL_VERSION_SSL23)
            self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */

    Change that into:

        else if (proto_version == PY_SSL_VERSION_SSL3)
            self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */
        #ifndef OPENSSL_NO_SSL2
        else if (proto_version == PY_SSL_VERSION_SSL2)
            self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */
        #endif
        else if (proto_version == PY_SSL_VERSION_SSL23)
            self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */

    This disables SSLv2 support, which apparently is gone in OpenSSL 1.0.
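    As a convenience, the whole list above collapses into one command (package names exactly as listed; on later releases zlib1g-dev may be needed in place of libz-dev):

        sudo apt-get install build-essential libz-dev libreadline-dev \
            libncursesw5-dev libssl-dev libgdbm-dev libsqlite3-dev libbz2-dev \
            tk-dev libdb-dev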

    Read the article
