Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 581/1252

  • EE vs Computer Science: Effect on Developers' Approaches, Styles?

    - by DarenW
    Are there any systematic differences between software developers (software engineers, architects, whatever the job title) with an electronics or other engineering background, compared to those who entered the profession through computer science? By electronics background, I mean an EE degree, a self-taught electronics tinkerer, another type of engineer, or an experimental physicist. I'm wondering if coming into the software-making professions from a strong knowledge of flip-flops, tristate buffers, clock edge rise times and so forth usually leads to a distinct approach to problems, a distinct mindset, or superior skills at certain specialties and a lack of skills at others, when compared to the computer science types who are full of concepts like abstract data types, object orientation, and database normalization, and who speak of "closures" in programming languages - things that make little sense to the soldering iron crowd until they learn enough programming. The real world, I'm sure, offers a wild range of individual exceptions, but for the most part, can you say there are overall differences? Would these have hiring implications, e.g. (to make up something) "never hire an electron wrangler to do database design"? Could knowing about any differences help job seekers find something appropriate more effectively? Or provide enlightenment or practical advice for those who find themselves misfits in a particular job role? (Btw, I've never taken any computer science classes; my impression of exactly what they cover is fuzzy. I'm an electronics/physics/art type myself.)

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app: it would be HTML based like a website, using Django as the framework; 100K pageviews a day are expected if the app goes viral; the users will not generate any media content, only some database data. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness, and easy scaling; for EC2: full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable, failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository you need a specific configuration in place to support such a setup. This post explains the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS). In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:

      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
        (LOAD_BALANCE = NO)(FAILOVER = ON)
        (CONNECT_DATA = (SERVICE_NAME = rac)
          (failover_mode = (type=select)(method=basic))))

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation was introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. XA affinity is supported at the global transaction ID level. The new feature is called WebLogic Active GridLink for RAC; it is implemented as the GridLink data source within WebLogic Server. This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster; you can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.
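
    For comparison, a SCAN-based descriptor needs only the single cluster alias in place of the per-node ADDRESS entries; a sketch, with placeholder host and service names:

      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = rac)))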

    Read the article

  • Software Center Freezing on Xubuntu 12.10

    - by AC3
    Whenever I open Software Center I get this error:

      2012-12-12 16:19:29,196 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/dbus/proxies.py', 410, '_introspect_error_handler')'
      2012-12-12 16:19:29,196 - dbus.proxies - ERROR - Introspect error on :1.74:/com/ubuntu/Softwarecenter: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
      2012-12-12 16:19:54,713 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
      2012-12-12 16:19:54,816 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
      2012-12-12 16:19:55,705 - softwarecenter.region - WARNING - failed to use geoclue: 'org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Geoclue.Master was not provided by any .service files'
      2012-12-12 16:19:56,575 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
      2012-12-12 16:19:56,592 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')'
      2012-12-12 16:19:56,592 - root - ERROR - Could not find any typelib for LaunchpadIntegration
      2012-12-12 16:19:56,910 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
      2012-12-12 16:19:56,935 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()

    Not sure if it is a bug or not; I have already uninstalled and reinstalled the program with Synaptic. Very little experience with Linux, and any help will be appreciated.

    Read the article

  • Today in the OTN Lounge (Wednesday October 3, 2012)

    - by Bob Rhubart
    Here's a quick rundown of today's activities in the OTN Lounge. OTN Lounge hours today: 8:00 am - 6:00 pm.

      9:00 am - 1:00 pm | RAC Attack
      Learn about Oracle Real Application Clusters (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC Attack at Oracle Open World (Pythian Blog); RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks).

      4:30 pm - 8:00 pm | Oracle Social Network Developer Challenge Judging
      The Oracle Social Network Developer Challenge comes to its conclusion with the final judging of entries and the award of the single prize: $500 in Amazon gift cards. Click here for more information.

      4:30 pm - 5:30 pm | Oracle ADF / Oracle Fusion Middleware Meet-up
      Join other Oracle ADF and Oracle Fusion Middleware developers and meet the product managers and engineers behind Oracle ADF, ADF Mobile, and ADF Essentials. Did we mention free beer?

    The OTN Lounge is located in the Howard St. Tent, between 3rd and 4th, directly between Moscone North and Moscone South. Access to the OTN Lounge requires an Oracle OpenWorld or JavaOne conference badge.

    Read the article

  • Problem in my terminal related to clamav

    - by Hejar Hejar
    Recently I decided to use Avast Antivirus. I uninstalled clamav and installed Avast. Now whenever I use my terminal I get the following errors:

      5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0 B/1,250 kB of archives.
      After this operation, 5,120 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      (Reading database ...
      dpkg: warning: files list file for package `clamav-daemon' missing, assuming package has no files currently installed.
      dpkg: warning: files list file for package `python-pyclamav' missing, assuming package has no files currently installed.
      dpkg: warning: files list file for package `clamav-base' missing, assuming package has no files currently installed.
      dpkg: warning: files list file for package `clamav-freshclam' missing, assuming package has no files currently installed.
      dpkg: warning: files list file for package `libclamav6' missing, assuming package has no files currently installed.
      dpkg: warning: files list file for package `python-clamav' missing, assuming package has no files currently installed.
      (Reading database ... 554940 files and directories currently installed.)

    Please help. Thank you
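
    A common cleanup for these warnings - sketched here, verify the package list with dpkg -l before running - is to purge the leftover entries so dpkg forgets the half-removed packages:

      sudo dpkg --purge clamav-daemon python-pyclamav clamav-base \
          clamav-freshclam libclamav6 python-clamav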

    Read the article

  • Synchronizing 3 servers over IP

    - by user93078
    I'm setting up a medical server for a hospital that has doctors in 3 different locations, meaning there would be 3 servers (1 in each location). All 3 servers would run just the following software: Ubuntu Server 12.04 minimal; MySQL, PHP 5, Apache; the medical software, which reads/writes to the MySQL database; remote admin apps like Nagios and Webmin; and rsync for backup (rsync-over-ssh) as a cron job. The doctors at each location would access patient and billing data from their respective servers. What I'd like is for all of these servers to hold synchronized data (especially the MySQL databases) - say, on an hourly basis each server synchronizes to a common remote server and the data is then brought down to each of the other servers. I know an easier way would be to have the medical app running on a remote web server, but since this is medical data we're talking about, and knowing how common it is in our area for the net to go down, I wouldn't like a web-based scenario. Is such a setup possible? Would this be the right way to do things, or is there a better way? Would really appreciate views and comments (or how to set this up).
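
    For the rsync-over-ssh piece, a sketch of the hourly cron entries each location's server might run (hostnames, paths, and database name are all illustrative); note that shipping dumps this way is only a coarse sync, and MySQL's own replication is the usual answer for keeping the databases themselves converged:

      # crontab on each location's server - illustrative names throughout
      0 * * * * mysqldump --single-transaction medical | gzip > /var/backups/medical.sql.gz
      5 * * * * rsync -az -e ssh /var/backups/ backup@central.example.org:/sync/site1/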

    Read the article

  • I need help with a timer for a text-based game; I need to include a MySQL query in it, but am not sure how

    - by Hijumper
    I would like to add a MySQL query somewhere in my timer code so that every time it restarts, 1 item is added to the database. I can get it to show how many items you have mined since the timer has been running, but I'm not quite sure how to add that to a MySQL database; any help would be appreciated :D Here's my timer code thus far:

      <head>
      <script type="text/javascript">
      var c = 10;
      var mineCount = 0;
      var t;
      var timer_is_on = 0;

      function timedCount() {
          document.getElementById('txt').value = c;
          c = c - 1;
          if (c <= -1) {
              mineCount++;
              var _message = "You have mined " + mineCount + " iron ore" +
                             (((mineCount > 1) ? "s" : "") + "!");
              document.getElementById('message').innerHTML = _message;
              startover();
          }
      }

      function startover() {
          c = 10;
          clearInterval(t);
          timer_is_on = 0;
          doMining();
      }

      function doMining() {
          if (!timer_is_on) {
              timer_is_on = true;
              t = setInterval(function () { timedCount(); }, 1000);
          }
      }
      </script>
      </head>
      <span style="float:left">
      <form>
      <input type="button" value="Mining" onClick="doMining()">
      <input type="text" id="txt">
      </form>
      </span>
      <center>
      <div id='message'></div>
      </center>
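
    One way to do the database half, sketched only - the endpoint name, table, and session handling are illustrative, not from the question: have startover() fire a background request, and let a small PHP script do the INSERT server-side.

      // in startover(), after resetting the counter: notify the server (illustrative URL)
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "add_ore.php", true);
      xhr.send();

      <?php // add_ore.php - illustrative table name and credentials
      session_start();
      $db = new mysqli('localhost', 'user', 'pass', 'game');
      $pid  = (int) $_SESSION['player_id'];
      $item = 'iron ore';
      $stmt = $db->prepare('INSERT INTO inventory (player_id, item) VALUES (?, ?)');
      $stmt->bind_param('is', $pid, $item);
      $stmt->execute();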

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year's Oracle Day 2012, "Your Vision, Engineered", I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth-generation Exadata X3 In-Memory Machine in a presentation entitled "Oracle Exadata X3 - Transforming Data Management". The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I've watched first-hand as Exadata has evolved from a data warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory. Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in My Oracle Support you'll find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. One customer called it the "Time Zone Spaghetti": after reading MOS notes about DST for several hours he ended up back at the note where he had begun, still not clear about what to do. I usually use the scripts from MOS Note:977512.1, as you just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance after applying the DST V18 patch to your database homes. As a reminder to myself when traveling, I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have changed in between, may contain corrections, and has a lot more explanatory information than I could cover here. And credit to Gunter Vermeir from Oracle Support, who is the owner of that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql
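
    Not the note's scripts themselves, but the usual quick checks around such a patch - the active and pending time zone file versions can be read straight from the database:

      SELECT * FROM v$timezone_file;
      SELECT property_name, property_value
        FROM database_properties
       WHERE property_name LIKE 'DST_%';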

    Read the article

  • New Marketing Kits Available

    - by Cinzia Mascanzoni
    New marketing kits are available on the OPN portal:

      Oracle Optimized DataCenter
      Oracle Storage for Oracle Database and Engineered Systems
      StorageTek SL150 - New Scalable Storage Solutions for Growing Businesses
      Extreme Database Performance meets Its Backup and Recovery Match with Oracle's Sun ZFS Backup Appliance
      Maximize Value and Business Agility through Data Center Virtualization
      Be A Content King with Oracle WebCenter Content

    Read the article

  • Lots of goodies

    - by wcoekaer
    We just issued a press release with a number of very good updates for everyone. There are a few things of importance:

      1) As of right now, Oracle Linux 6 with the Unbreakable Kernel is certified with a number of Oracle products such as Oracle Database 11gR2 and Oracle Fusion Middleware. The certification pages in the Oracle Support portal will be updated with the latest certification status for the various products. As always, we have gone through a long period of very comprehensive testing and validation to ensure that the whole stack works really well together, with very large database workloads, middleware application workloads, etc.

      2) Standard certification efforts for Oracle Linux 6 with the Red Hat Compatible Kernel are in progress and we expect them to be completed in the next few months. Because of the compatibility between OL6 and RHEL6 we can then also state certification for RHEL6.

      3) Oracle Linux binaries (and of course source code) have been free for download -and- use (including production, not just trial periods) since day one. You can freely redistribute the binaries, unlike many other Linux vendors where you need to pay a support subscription to even get access to the binaries. We offered both the base distribution release DVDs (OL4, OL5, OL6) and the update releases, such as 5.1, 5.2, etc. this way. Today, in this announcement, we also started to make available the bugfix and security updates released in between these update releases. So the errata streams (both binary and source code) for OL4, 5 and 6 are now free for download and use from http://public-yum.oracle.com. This includes uek and uek2. The nice thing is: if you want a complete, up-to-date system without support, use this; if you then need support, get a support subscription. Simple, convenient, effective. We have great SLAs in producing our update streams, consistency in release timing, and testing of all the components. Have at it!
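
    As a sketch, pointing a box at the free errata stream is just a repo definition under /etc/yum.repos.d; the baseurl below is illustrative - take the real one from the files published on public-yum.oracle.com:

      [ol6_latest]
      name=Oracle Linux 6 Latest (x86_64)
      baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/
      gpgcheck=1
      enabled=1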

    Read the article

  • Quick Fix for GlassFish/MySQL NoPasswordCredential Found

    - by MarkH
    Just the other day, I stood up a GlassFish 3.1.2 server in preparation for a new web app we've developed. Since we're using MySQL as the back-end database, I configured it for MySQL (driver) and created the requisite JDBC resource and supporting connection pool. Pinging the finished pool returned a success, and all was well. Until we fired up the app, that is -- in this case, after a weekend. Funny how things seem to break when you leave them alone for a couple of days. :-) Strangely, the error indicated "No PasswordCredential found". Time to re-check that pool. All the usual properties and values were there (URL, driverClass, serverName, databaseName, portNumber, user, password) and were populated correctly. Yes, the password field, too. And it had pinged successfully. So why the problem? A bit of searching online produced enough relevant material to offer promise. I didn't take notes as I was investigating the cause (note to self), but here were the general steps I took to resolve the issue: First, per some guidance I had found, I tried resetting the password value to nothing (using () for a value). Of course, this didn't fix anything; the database account requires a password. And when I tried to put the value back, GlassFish politely refused. Hmm. I'd seen that some folks created a new pool to replace the "broken" one, and while that did work for them, it seemed to simply side-step the issue. So I deleted the password property - which GlassFish allowed me to do - and restarted the domain. Once I was back in, I re-added the password property and its value, saved it, and pinged...success! But now to the app for the litmus test. The web app worked, and everything and everyone was now happy. Not bad for a Monday.  :-D Hope this helps, Mark
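
    For anyone who would rather script the delete-restart-re-add dance than click through the console, an asadmin sketch - the pool name is illustrative and the exact dotted property path may vary by GlassFish version:

      asadmin get resources.jdbc-connection-pool.MyPool.property
      asadmin set resources.jdbc-connection-pool.MyPool.property.Password=newSecret
      asadmin restart-domain
      asadmin ping-connection-pool MyPool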

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server, and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times when SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I could learn. Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity, since he appeared to be using a fruit laptop. He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory, and continued by cautioning that having too many indexes could defeat this approach. His logic was sound, if not always practical, and it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominately for OLTP databases prior to moving on to data warehouse solutions. Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details we need easily. Just what the doctor ordered.

    Read the article

  • How do I list all the relation variables and debug them interactively?

    - by mfisch
    I'm writing a charm that requires a MySQL database. I found from looking at other charms that this (below) is how I get the info about the database:

      user=`relation-get user`
      password=`relation-get password`
      mysqlhost=`relation-get private-address`

    But I just found that from reading the wordpress charm example; is there a way to show all the relation variables that I can use? Also, while debugging my db-relation-changed script, I wanted to ssh into my host and interactively run those commands, for example relation-get user, but it didn't work. I resorted to having to restart everything and use juju log to print them out. This wasted a lot of time. Is there a way to print out these relations, either from my dev box or from the instance running my charm? (Below is what happens when I tried to interactively run relation-get):

      ubuntu@mfisch-local-tracks-0:~$ relation-get user
      usage: relation-get [-h] [-o OUTPUT] [-s SOCKET] [--client-id CLIENT_ID]
                          [--format FORMAT] [--log-file FILE]
                          [--log-level CRITICAL|DEBUG|INFO|ERROR|WARNING]
                          [-r RELATION ID] [settings_name] [unit_name]
      No JUJU_AGENT_SOCKET/-s option found

    I tried juju debug-hooks tracks/0 -e local; that dropped me into a shell and relation-get still failed.

    Read the article

  • Test JPQL with NetBeans IDE 7.3 Tools

    - by Geertjan
    Since I pretty much messed up this part of the "Unlocking Java EE 6 Platform" demo, which I did together with PrimeFaces lead Çagatay Çivici during JavaOne 2012, I feel obliged to blog about it to clarify what should have happened! In my own defense, I only learned about this feature 15 minutes before the session started. In 7.3 Beta it works for Java SE projects, while for Maven-based web projects you need a post-7.3 Beta build, which is what I set up for my demo right before it started. Then I saw that the feature was there, without actually trying it out, which resulted in that part of the demo being a bit messy. And thanks to whoever it was in the audience who shouted out how to use it correctly! The screenshots below show everything related to this new feature, available from 7.3 onwards, which means you can try out your JPQL queries right within the IDE, without deploying the application (you only need to build it since the queries are run on the compiled classes):

      [Screenshot: SQL view]
      [Screenshot: result view for the above]
      [Screenshot: the result of a more specific query, i.e., a check that a record with a specific name value is present in the database]

    Also note that there is code completion within the editor part of the dialog above. I.e., as you press Ctrl-Space, you'll see context-sensitive suggestions for filling out the query. All this is pretty cool stuff! Saves time because now there's no need to deploy the app to check the database connection.
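
    The queries being exercised are plain JPQL; the presence check from the last screenshot would look something like this (entity and field names are illustrative, not from the demo):

      SELECT c FROM Customer c WHERE c.name = 'Smith'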

    Read the article

  • How to setup users for desktop app with SQL Azure as backend?

    - by Manuel
    I'm considering SQL Azure as the DB for a new application I'm developing. The reason I want to go with Azure is that I don't want to have to maintain yet another database (or several), and I want my users to be able to access the data from anywhere. The problem is that I'm not clear on how users will connect. The application is a basic CRUD type of Windows app. I've read that you need to add your IP to SQL Azure's firewall to connect to it, but I don't know if that's for administration purposes only. Can anyone clarify whether anyone (anywhere) can access the data with the proper credentials? Which of the following scenarios would work best (if at all)? A) Add each user to SQL Azure and have the app connect directly to Azure as if it were connecting to SQL Server. B) Add an anonymous user to SQL Azure and pass the real user's password/hash with every call so the Azure database can service the requests accordingly. C) Put a WCF service in between, so that it handles the authentication; the service would serve only the appropriate information to the user given his/her authentication, and SQL Azure would be open to the service exclusively. D) - ideas are welcome - This is confusing because all the Azure examples I see are for websites. I have a hard time believing SQL Azure wouldn't handle the case of desktop apps connecting to it. So what's the best practice?
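
    For scenario A, each desktop client would use an ordinary SQL Server connection string pointed at the Azure server (server name and credentials below are illustrative) - which is exactly why the firewall question matters, since every client IP has to be allowed:

      Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=appuser@myserver;Password=...;Encrypt=True;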

    Read the article

  • Query for server DefaultData & DefaultLog folders

    - by jamiet
    Do you ever need to query for the DefaultData and DefaultLog folders for your SQL Server instance? Well, I just did, and the following script enabled me to do that:

      DECLARE @HkeyLocal NVARCHAR(18), @MSSqlServerRegPath NVARCHAR(31), @InstanceRegPath SYSNAME;
      SELECT @HkeyLocal = N'HKEY_LOCAL_MACHINE'
      SELECT @MSSqlServerRegPath = N'SOFTWARE\Microsoft\MSSQLServer'
      SELECT @InstanceRegPath = @MSSqlServerRegPath + N'\MSSQLServer'
      DECLARE @SmoDefaultFile NVARCHAR(512)
      EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @SmoDefaultFile OUTPUT
      DECLARE @SmoDefaultLog NVARCHAR(512)
      EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @SmoDefaultLog OUTPUT
      SELECT ISNULL(@SmoDefaultFile, N'') AS [DefaultFile], ISNULL(@SmoDefaultLog, N'') AS [DefaultLog]

    I haven't done any rigorous testing or anything like that; all I can say is... it worked for me (on SQL Server 2012). Use as you see fit. Doubtless this information exists in a multitude of other places, but nevertheless I'm putting it here so I know where to find it in the future. Just for fun I thought I'd try this out against Windows Azure SQL Database. Unsurprisingly, it didn't work there:

      Msg 40515, Level 15, State 1, Line 16
      Reference to database and/or server name in 'MASTER.dbo.xp_instance_regread' is not supported in this version of SQL Server.

    @Jamiet

    Read the article

  • Help with URL Rewrite

    - by bodesam
    This is the first time I'm doing this and I have been doing some research on it. I have a page that selects some info from a database and displays it with a link to a second page, which uses the result to query the database, something like this:

      $sel = mysql_query("select id, title from thetable");
      while ($row = mysql_fetch_array($sel)) {
          $id    = $row['id'];
          $title = $row['title'];
          echo "<a href='more.php?id=$id'>$title</a>";
      }

    The issue is that in the more.php page, instead of more.php?id=5 showing in the address bar, I want something like more/title. Secondly, as on most sites, I want the link on the referring page to show this friendly URL on mouse hover, not more.php?id=5. And I notice that on most sites some words like 'a', 'and', 'the', etc. are usually removed from the URL title (even if they are there originally); moreover, how does one handle the situation where more than one record has the same title? How does one go about achieving this URL rewrite with htaccess or whatever method is used? Thanks.
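
    A minimal sketch of the rewrite half, assuming a slug column is added to the table (pattern, file, and column names are illustrative): the rule maps more/some-title to the existing script, which then looks the record up by slug. Duplicate titles are usually handled by appending the id to the slug on insert (e.g. my-title-2), and stop words are stripped at the same time.

      # .htaccess
      RewriteEngine On
      RewriteRule ^more/([a-z0-9-]+)/?$ more.php?slug=$1 [L,QSA]

      // more.php, matching the question's mysql_* style
      $slug = mysql_real_escape_string($_GET['slug']);
      $sel  = mysql_query("SELECT id, title FROM thetable WHERE slug = '$slug'");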

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First a bit of background: we are re-architecting our product suite that was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system, and our DB schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing repositories, units of work, mocking frameworks, etc. This means there will be a cost, and an investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely a YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case. Any thoughts on whether I am missing anything here?

    Read the article

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to chart a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery:

      setDefaultChartType: function (graphObject) {
          var prop1 = graphObject.properties.key;
          var numCols = 0;
          $.each(graphObject.columns, function (colIndex, column) {
              numCols++;
          });
          if (numCols > 6 || (prop1 > 1 && graphObject.data.length == 1)) {
              graphObject.setChartType("line");
          } else if (numCols <= 6 && prop1 == 1) {
              graphObject.setChartType("bar");
          } else if (numCols <= 6 && prop1 > 1) {
              graphObject.setChartType("pie");
          }
      }

    My question is: should I use mock data procured from the actual database, or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database, but on the other hand, it would require a lot more effort to keep the test data up to date, and I'm not sure that's necessary.
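
    For what it's worth, a fabricated-data test needs nothing from the database. A minimal sketch, assuming jQuery is loaded and the function is callable directly (it is shown above as an object member), with illustrative values:

      // fabricate the smallest graphObject that exercises one branch
      function fakeGraph(numCols, prop1, dataLen) {
          var cols = {};
          for (var i = 0; i < numCols; i++) { cols["c" + i] = {}; }
          return {
              properties: { key: prop1 },
              columns: cols,
              data: new Array(dataLen),
              chartType: null,
              setChartType: function (t) { this.chartType = t; }
          };
      }

      var g = fakeGraph(3, 1, 1);   // 3 columns, prop1 == 1
      setDefaultChartType(g);
      console.assert(g.chartType === "bar", "3 cols and prop1 == 1 should pick a bar chart");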

    Read the article

  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. Multiple websites are going to use this service to manage and display images. The requirement is to have 5 pre-defined image sizes available. The 2 options I see are the following:

      1. The web service creates the 5 images, stores them in the file system, and stores the URLs in the database when the user submits the image. When an image is requested, the web service returns an array of URLs. I see this option as a little hard on the hard drive: the estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing is done when the user submits the image, and each image is pulled from the file system.

      2. The web service stores just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service gets the info from the DB, loads the image into memory, creates its 5 instances, and returns an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory; the heavy processing is done when the images get requested.

    A plus I see for option 2 is that it gives me the option to rewrite the URLs of the images and make them site-dependent (prettier), rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
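
    For option 2, the resize-on-request path is short with any imaging library; a sketch in Python with Pillow (size presets and paths are illustrative), caching each result so a given size is computed only once:

      import os
      from PIL import Image

      SIZES = {"thumb": (100, 100), "small": (320, 240)}  # illustrative presets

      def get_resized(original_path, size_name, cache_dir="/var/cache/img"):
          """Return a cached resized copy, creating it on first request."""
          out = os.path.join(cache_dir, size_name + "-" + os.path.basename(original_path))
          if not os.path.exists(out):
              img = Image.open(original_path)
              img.thumbnail(SIZES[size_name])  # shrinks in place, keeps aspect ratio
              img.save(out)
          return out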

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say, "If a man tells you that his car squirts milk in his eye when you lift the hood, don't bet against that. You'll end up with milk in your eye." My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should let the other person talk and actively listen before you propose a solution. Well, I saw a consultant "bet against the impossible" the other day - and it bit her. She explained to the person telling her the problem that the situation simply couldn't exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said "well, must be something else." She just couldn't admit she was wrong. So don't go there. If someone explains a problem with their database to you, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend who recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn't change the connection string information, and that's provided by the target system, so this is impossible. But I won't tell him that. Not until I look a little more. :)

    Read the article

  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for altering the state of your website so that you can implement certain features? For instance, you might want to: restrict logins / disable certain features; turn the site read-only; or turn the site into a single "maintenance mode" page. Doing any of the above is pretty trivial: you can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, and just read that value and do the appropriate thing. But these solutions have their problems. For instance, if you use a file for your application and you want to switch off a certain feature temporarily, you need to update this file on all servers. So then you might want to look at running something like ZooKeeper, but you are probably overcomplicating things. So then you might decide to store these "feature" flags in a database, but then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query; then you retrieve all of your "features" from memcached and add ~2 ms of latency to your application on every page. To get around this, you decide to use a two-tier cache system, whereby you use an in-memory cache on each machine (like APC/Redis etc.). This would work, but then it gets complicated, because you have to set the key/hash lifetime to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "features" result, the on-machine cache is prompt enough to pick up the new state. What suggestions might you have? Keeping in mind that optimization/efficiency is the priority here.
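
    For reference, the two-tier read described above might look like this in outline - framework-free Python with the shared store injected, where the short local TTL bounds how stale any one box can be after an invalidation:

      import time

      class FlagCache:
          """Per-process flag cache in front of a shared store (e.g. memcached)."""
          def __init__(self, fetch_shared, local_ttl=60):
              self.fetch_shared = fetch_shared   # callable hitting memcached, then the DB
              self.local_ttl = local_ttl
              self._flags, self._expires = {}, 0.0

          def get(self, name, default=False):
              now = time.time()
              if now >= self._expires:           # local copy expired: refresh from shared tier
                  self._flags = self.fetch_shared()
                  self._expires = now + self.local_ttl
              return self._flags.get(name, default)

      # usage sketch: flags = FlagCache(lambda: cache_get("features") or load_from_db())
      # if flags.get("read_only"): reject_writes()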

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also 3 consumers of UserRepository: an admin interface which may list all users; a public front-end which may list all users; and an account-authenticated API which should only list its own users. Assume UserRepository is something like this:

      class UsersRepository extends DatabaseAbstraction {
          private function query() {
              return $this->database()->select('users.*');
          }

          public function getAll() {
              return $this->query()->exec();
          }

          // IMPORTANT:
          // Tons of other methods for searching, filtering,
          // joining of other tables, ordering and such...
      }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions: how should I handle querying of users filtered by account_id? I can picture three possible roads.

    1. Should I create an AccountUsersRepository?

      class AccountUsersRepository extends UserRepository {
          public function __construct(Account $account) {
              $this->account = $account;
          }

          private function query() {
              return parent::query()
                  ->where('account_id', '=', $this->account->id);
          }
      }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

      class AccountsRepository extends DatabaseAbstraction {
          public function getAccountUsers(Account $account) {
              return $this->database()
                  ->select('users.*')
                  ->where('account_id', '=', $account->id)
                  ->exec();
          }
      }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

      class Account extends Entity {
          public function getUsers() {
              return UserRepository::findByAccountId($this->id);
          }
      }

    This feels more like an aggregate root to me, but introduces a dependency of UserRepository on the Account entity, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution? Footnote: besides permissions being a service concern, in my understanding they shouldn't implement SQL queries, but leave that to repositories, since those may not even be SQL-driven.

    Read the article
