Search Results

Search found 30217 results on 1209 pages for 'database'.


  • Examples of "lost art" in software technology/development

    - by mamcx
    With the advent of a new technology, some old ideas - despite being good - are forgotten in the process. I often read how some "new" thing was already present in Lisp 60 years ago, yet only recently resurfaced under another name or on top of another language. Now it looks like the new old thing is building functional, immutable, lock-free-threading stuff... and that makes me wonder: what has been "lost" in the art of software development? What ideas are almost forgotten, waiting to resurface? One of mine: I remember when I coded in FoxPro. The idea of having a full stack for developing database apps without impedance mismatch is something I truly miss. Among current languages, I have never found another environment that matches how easy it was to develop in Fox back then. What else is missing?

    Read the article

  • Why can't I ping the server? VMware set to 'Bridged' loses IP address

    - by Dave
    I have installed a fresh 10.04 server onto a laptop on a home network as a VMware machine, set the network connection to 'Bridged: connect directly to the physical network' from within VMware, and rebooted the server. It then loses its IP address. dhclient eth0 says "No working leases in persistent database - sleeping". DHCP is working fine on the wi-fi router. The laptop is wired to a wireless router, and from there wirelessly to a desktop. Desktop and laptop can ping each other from Windows. I can ping the VM from Windows on the same laptop, but not from the desktop. Strangely, ping has started to resolve hostnames to IPv6 addresses instead of IPv4; I don't know whether that's connected. A kick in the right direction would be greatly appreciated. I've been an Ubuntu desktop user for a few years, but I'm new to Ubuntu servers.

    Read the article

  • Can't get into the admin console after migrating to new server

    - by Emerson
    I migrated my WordPress blog to a new server, and everything seemed to be working fine until it started giving me this error when entering the admin area:

        Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 4864 bytes)
        in /home/neworder/public_html/blog/wp-admin/includes/plugin.php on line 729

    Line 729 has:

        $protected = array( '_wp_attached_file', '_wp_attachment_metadata', '_wp_old_slug', '_wp_page_template' );

    I had installed the maintenance-mode plugin, and I suspect it is what broke the site. But if I remove the plugin, it gives another error:

        Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 19456 bytes)
        in /home/neworder/public_html/blog/wp-admin/includes/post.php on line 1158

    And that line has:

        $content .= '<p class="hide-if-no-js">' . esc_html__( 'Remove featured image' ) . '</p>'; }

    I tried to restore the blog's file system from the old server, and also to restore the database from the old server (twice), but it still gives me the same error. The blog itself seems to be working fine: http://blog.antinovaordemmundial.com/
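
    Both errors point at PHP's 32 MB memory cap (33554432 bytes) rather than at those particular lines; a plugin may be leaking, but more headroom at least tends to get you back into the admin area. A minimal first step, assuming you can edit wp-config.php (64M is an arbitrary example, and some hosts cap memory in php.ini regardless):

        <?php
        // wp-config.php: raise the memory ceiling WordPress asks PHP for.
        // 64M is an example value, not a recommendation; the host's own
        // php.ini limit still wins on some shared hosts.
        define('WP_MEMORY_LIMIT', '64M');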

    Read the article

  • How to shift development culture from tech fetish to focusing on simplicity and getting things done?

    - by Serge
    Looking for ways to shift team/individual culture from chasing the latest fads, patterns, and all kinds of best practices to focusing on finding the quickest, simplest solutions and shipping features. My definition of "tech fetish": chasing the latest fads, applying new technologies and best practices without considering product/project impact, focusing on micro-optimization, and creating platforms and frameworks instead of finding simple and quick ways to ship product features. A few examples of the culture difference: from "Spent a day trying to map a database query with five complex joins in NHibernate" to "Wrote a SQL query and used a DataReader to pull the data in"; from "Wrote a super-fast JSON parser in C++" to "Used Python to parse the JSON response and call into the C++ code"; from "Let's use WCF because it supports all possible communication standards" to "REST is a simple text-based approach; let's stick with it and use plain HTTP handlers".

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and management of the resulting objects in a database. Assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running, using procedural algorithms (for example, Perlin noise). The players (/clients) can modify the properties of these objects, but have to notify the server of the changes. How can this communication address unique objects, so that both the server and the client know which object they are talking about? Not only can the inner properties of the objects differ, but also visible ones, such as position. When the player wants to select one of these objects, the game has to find out its ID - does anyone know which methods or algorithms can accomplish this?
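
    One common answer (a sketch; the helper and its parameters are hypothetical, and it assumes generation is deterministic given a seed and coordinates) is to derive the ID from the inputs that produced the object, so server and client compute identical identifiers without ever exchanging them. Hashing the generation inputs rather than mutable state such as position keeps the ID stable after players edit the object:

        <?php
        // Hypothetical helper: both sides run the same deterministic generator,
        // so both can recompute the same ID from the generation inputs.
        function objectId(string $worldSeed, int $chunkX, int $chunkY, int $index): string
        {
            // Any stable hash works; sha1 collisions are negligible at
            // game-world scale.
            return sha1("$worldSeed:$chunkX:$chunkY:$index");
        }

        // The 3rd object generated in chunk (12, -4) of world "alpha":
        $id = objectId('alpha', 12, -4, 3);
        // Modifications are then sent to the server keyed by this ID.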

    Read the article

  • How to improve programming skills?

    - by Mike
    I'm very new to programming. I started learning PHP about half a year ago, so I do know something: I can write small functions, I can export and import information from a database, and I can make a website. But I don't know OOP principles, and I don't know about objects and classes. Should I move on to OOP principles and learn about classes, methods and objects? If not, what should I do - continue writing simple code? How can a programmer write his/her own API, and is OOP necessary for that? So how can I improve my skills? I love programming - I spend 24/7 on it - so any help will be appreciated.
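
    For the classes-and-objects step specifically, a minimal PHP sketch (all names here are made up for illustration) shows the core idea: bundling data with the functions that operate on it:

        <?php
        // A class groups data (properties) with behavior (methods).
        class GuestbookEntry
        {
            private string $author;
            private string $message;

            public function __construct(string $author, string $message)
            {
                $this->author = $author;
                $this->message = $message;
            }

            public function render(): string
            {
                // Behavior lives next to the data it uses.
                return htmlspecialchars($this->author) . ' wrote: '
                     . htmlspecialchars($this->message);
            }
        }

        $entry = new GuestbookEntry('Mike', 'Hello, OOP!');
        echo $entry->render();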

    Read the article

  • How can I convince management to deal with technical debt?

    - by Desolate Planet
    This is a question I often ask myself when working with developers. I've worked at four companies so far, and I've become aware of a lack of attention to keeping code clean and dealing with the technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than using something like MySQL, and that created hell for the team when refactoring or extending the application. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there, and it's horrible to see the impact that has on team morale. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving; the company then becomes a revolving door, with developers coming in and out and making the code worse. How do you communicate this to management and get them interested in sorting out technical debt?

    Read the article

  • Checking for DBNull

    - by Jim Lahman
    Using a table adapter against a SQL Server database table that can return a NULL record, we determine whether the fields are NULL by comparing against System.DBNull. You can see the NULL records by looking at the table in SQL Management Studio.

    Using a table adapter to retrieve a record:

        try
        {
            this.vTrackingTableAdapter.FillByTrkZone(this.dsL1Write.vTracking, iTrkZone);
        }
        catch (Exception ex)
        {
            // Wrap the failure with context about which tracking zone was being read.
            sLogMessage = String.Format(
                "Error getting coil number from tracking table at {0} - {1}",
                sTrkName,
                ex.Message);
            throw new CannotReadTrackingTableException(sLogMessage);
        }

    Looking at the record as it is returned from the table adapter, the ItemArray columns are:

        [0] ChargeCoilNumber  [1] HeadWeldZone  [2] TailWeldZone
        [3] ZoneLen           [4] ZoneCoilLen   [5] Confirmed
        [6] Validated         [7] EntryWidth    [8] EntryThickness

    Since each item in the ItemArray is an object, we can test for null:

        // A NULL column comes back as System.DBNull.Value, not as a null reference.
        if (dsL1Write.vTracking.Rows[0].ItemArray[0] == System.DBNull.Value)
        {
            throw new NoCoilAtPORException("NULL coil found at tracking zone " + sTrkName);
        }

    If no records were returned by the table adapter:

        if (dsL1Write.vTracking.Rows.Count == 0)
        {
            throw new NoCoilAtPORException("No coils found at tracking zone " + sTrkName);
        }

    Read the article

  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server, and I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to have consume this data. I could easily set the OPC client to send the data to a SQL server and have the website read from it, but that would be a lot of writes: if I wanted the data updated every 10 seconds, I'd be writing to the database every 10 seconds. (I could probably serialize the 100 points to get one write per 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well; if I had 100 of these computers, the situation would quickly grow out of hand. Obviously I am well out of my league here, and I have no experience working with data volumes like this. What are my options, and what should I research?
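
    One middle ground between per-point writes and opaque serialized blobs (a sketch, assuming a hypothetical MySQL table readings(point_name, read_at, value)) is to batch each 10-second snapshot into a single multi-row INSERT: one write per cycle, but every point stays individually searchable:

        <?php
        // One multi-row INSERT per cycle instead of 100 single-row writes.
        // NOW() assumes MySQL; use CURRENT_TIMESTAMP on other databases.
        function writeSnapshot(PDO $db, array $points): void
        {
            $rows = [];
            $params = [];
            foreach ($points as $name => $value) {   // ['TankTemp' => 71.3, ...]
                $rows[] = '(?, NOW(), ?)';
                $params[] = $name;
                $params[] = $value;
            }
            $sql = 'INSERT INTO readings (point_name, read_at, value) VALUES '
                 . implode(', ', $rows);
            $db->prepare($sql)->execute($params);
        }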

    Read the article

  • AppFabric – where are all the monitoring events?

    - by Shawn Cicoria
    When you've just gone through a setup of AppFabric and you've got some WF/WCF things happening, if you look at the Dashboard and see nothing, it might be as simple as restarting SQL Agent. I generally don't reboot my system for several days, and after installing AppFabric the SQL Agent jobs didn't start firing right away. Yes, even when booting to VHD, you can still put the machine to sleep (just log off and click Sleep)... So, after spending some time looking through the SQL monitoring DB that AppFabric was configured to use, I saw a bunch of records in the [AppFabric_Monitoring].[dbo].[ASStagingTable] table. This table is the stopping point before the SQL Agent job (or Service Broker in SQL Express) pushes the items to their final resting place. This post goes through a few things to check when configuring AppFabric monitoring: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx Of course, during development you might want to clean up regularly. For that there's the PowerShell command: Clear-AsMonitoringSqlDatabase -Database AppFabric_Monitoring

    Read the article

  • Build an ASP.NET 3.5 Guestbook using MS SQL Server and VB.NET

    One of the most important website features is a guest book. This is particularly useful if you need to know the responses and reactions of your website's visitors. With the release of ASP.NET 3.5 and Visual Web Developer Express 2008, several web controls make it possible to create an ASP.NET application without having to hand-code everything manually, including database scripts, server-side scripts, etc. You can see how that would be helpful for writing a guest book. This is the first part of a multi-part series...

    Read the article

  • ffmpeg: cut multiple input files with seeking to one output file

    - by Josef Kufner
    I have a list of video files (loaded from a database), each with the start and end time of the requested interval:

        # file     begin   end
        v1.mp4     1:01    2:01
        v2.mp4     3:02    3:32
        v3.mp4     2:03    5:23

    And I need to create a single video file containing these intervals back to back:

        [0:00]---v1---[1:00]---v2---[1:30]---v3---[4:50]

    I prefer using ffmpeg, since it is installed on the server. The calling program is written in PHP. It is easy to cut one input to one output (argument escaping removed for clarity; $duration is $end - $begin):

        exec("ffmpeg -ss $begin -i $input_file -t $duration -c copy $output_file");

    Is there any easier way than executing ffmpeg for each interval and then executing it once more to concatenate the prepared clips together? I really do not want a lot of temporary files or complex process handling.
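
    For what it's worth, a single ffmpeg invocation can do all the cutting and joining with the trim/atrim and concat filters, avoiding temporary files entirely; the trade-off is that this path re-encodes, since -c copy cannot pass through a filter graph. A sketch, assuming the intervals have been converted to seconds:

        <?php
        // One-pass cut-and-concat. $clips would come from the database;
        // intervals in seconds (1:01 = 61 s, and so on).
        $clips = [
            ['v1.mp4',  61, 121],
            ['v2.mp4', 182, 212],
            ['v3.mp4', 123, 323],
        ];

        $inputs = '';
        $filter = '';
        $labels = '';
        foreach ($clips as $i => [$file, $begin, $end]) {
            $inputs .= ' -i ' . escapeshellarg($file);
            // trim/atrim keep only the interval; setpts/asetpts restart its timestamps at 0.
            $filter .= "[$i:v]trim=start=$begin:end=$end,setpts=PTS-STARTPTS[v$i];";
            $filter .= "[$i:a]atrim=start=$begin:end=$end,asetpts=PTS-STARTPTS[a$i];";
            $labels .= "[v$i][a$i]";
        }
        $filter .= $labels . 'concat=n=' . count($clips) . ':v=1:a=1[v][a]';

        $cmd = 'ffmpeg' . $inputs
             . ' -filter_complex ' . escapeshellarg($filter)
             . " -map '[v]' -map '[a]' output.mp4";
        exec($cmd);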

    Read the article

  • Top 10 Transact-SQL Statements a SQL Server DBA Should Know

    Microsoft SQL Server is a feature-rich database management system with an enormous number of T-SQL commands. With each feature supporting its own list of commands, it can be difficult to remember them all. MAK shares his top 10 T-SQL statements that a DBA should know.

    Read the article

  • Oracle University New Courses (Week 42)

    - by rituchhibber
    Oracle University has recently released the following new courses in English:

    Database
      - Oracle Enterprise Manager Cloud Control 12c: Install & Upgrade (Training On Demand)
      - MySQL Performance Tuning (Training On Demand)
    Fusion Middleware
      - Oracle GoldenGate 11g Fundamentals for Oracle (4 days)
      - Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
      - Oracle WebCenter Portal 11g: Build Portals with Spaces (3 days)
    Business Intelligence
      - Oracle BI 11g R1: Create Analyses and Dashboards (4 days)
    SOA & BPM
      - SOA Adoption and Architecture Fundamentals (3 days)
    eBusiness Suite
      - R12 Oracle Using and Maintaining Approvals Management - Self-Study Course
      - R12 Oracle HRMS Advanced Benefits Fundamentals - Self-Study Course
    WebLogic
      - Oracle WebLogic Server 11g: Monitor and Tune Performance (Training On Demand)
    Financial
      - Oracle Project Financial Planning 11.1.2: Create Projects (3 days)
    Tuxedo
      - Oracle Tuxedo 12c: Application Administration (5 days)
    Java
      - Java SE 7: The Platform Evolves - Self-Study Course
    Primavera
      - Primavera Client/Server Partner Trainer Course - Self-Study Course
      - Primavera Progress Reporter 8.2 - Self-Study Course

    For further information and course dates, contact your local Oracle University team.

    Read the article

  • Methods of ordering function definitions in code

    - by xralf
    When I work on a programming project (usually a command-line application in Python with many switches), I usually end up with 30 or more functions. Most of the functions are in one file (except some helpers that I use across projects). Some of the functions are called for a particular switch (like -p or --print), but many do helper computations, print operations, or database operations, because I don't want the main functions to be too large. When I have an idea for new functionality, I often put new functions into the file at random. Should I think more about it and place them somewhere in particular? Are there established methods for this?

    Read the article

  • Caption Competition 9: Carry on Captioning

    - by Simple-Talk Editorial Team
    This picture below – the one with the rabbits, yes – is clearly something to do with databases. But what? Tell us in the comments – the best / funniest entry wins a $50 Amazon gift card. Some suggestions to help turn on the comedy tap: The world's first self-replicating cryptocurrency was hit by hyperinflation almost immediately. Early punchcard computers were ineffective but adorable. Elmer Fudd teams up with Wile E. Coyote to create the ultimate drop database. You can beat that. A child could beat that. Prove it in the comments below.

    Read the article

  • Hello NHibernate! Quickstart with NHibernate (Part 1)

    - by BobPalmer
    When I first learned NHibernate, I could best describe the experience as less of a learning curve and more like a learning cliff.  A large part of that was the availability of tutorials.  In this first of a series of articles, I will be taking a crack at providing people new to NHibernate the information they need to quickly ramp up with NHibernate. For the first article, I've decided to address the gap of just giving folks enough code to get started.  No UI, no fluff - just enough to connect to a database and do some basic CRUD operations.  In future articles, I will discuss a repository pattern for NHibernate, parent-child relationships, and other more advanced topics. You can find the entire article via this Google Docs link: http://docs.google.com/Doc?docid=0AUP-rKyyUMKhZGczejdxeHZfOGMydHNqdGc0&hl=en Enjoy! -Bob

    Read the article

  • Possible automated Bing Ads fraud?

    - by Gary Joynes
    I run a website that generates life insurance leads. The site is very simple: (a) a form for capturing the user's details, life insurance requirements, etc., and (b) a quote comparison feature. We drive traffic to the site using conventional Google AdWords and Bing Ads campaigns. Since the 6th of January we have received 30-40 dodgy leads, which have the following in common: all created between 2 and 8 AM; phone number always in the format "123 1234 1234"; name, date of birth, policy details and address all seem valid and are unique across the leads; email addresses from "disposable" email accounts, including dodgit.com, mailinator.com, trashymail.com and pookmail.com; some leads come from the customer form, some via the quote comparison feature; all come from different IP addresses; we get the keyword information passed through from the URLs; all look to be coming from Bing Ads; all come from Internet Explorer 7 and 8. The consistency of the data and the random IP addresses suggest an automated approach, but I'm not sure of the intent. We can handle identifying these leads within our database, but is there any way of stopping this at the ad level, i.e. before the click-through?
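
    Bing Ads does let you exclude IP addresses at the campaign level, but since these leads arrive from different IPs each time, a server-side gate before the lead is stored is probably the practical stop. A minimal sketch built from the patterns reported above (the domain list is illustrative, not exhaustive):

        <?php
        // Flag a lead for review before storing it. Both checks come straight
        // from the observed leads; widen them if variants appear.
        function looksAutomated(string $phone, string $email): bool
        {
            // The exact phone value every dodgy lead shared.
            if ($phone === '123 1234 1234') {
                return true;
            }
            $disposable = ['dodgit.com', 'mailinator.com', 'trashymail.com', 'pookmail.com'];
            $at = strrchr($email, '@');
            if ($at === false) {
                return true; // no domain at all
            }
            return in_array(strtolower(substr($at, 1)), $disposable, true);
        }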

    Read the article

  • Can't install gimp-plugin-registry

    - by Uri Herrera
    I tried to install the Ubuntu Studio Graphics meta-package, but it didn't install correctly: the package gimp-plugin-registry just won't install. I tried the one in the Software Center and the one in the WebUpd8 PPA; neither package works.

        The following NEW packages will be installed:
          gimp-plugin-registry
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/1395kB of archives.
        After this operation, 3592kB of additional disk space will be used.
        (Reading database ... 402557 files and directories currently installed.)
        Unpacking gimp-plugin-registry (from .../gimp-plugin-registry_3.2-1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/gimp-plugin-registry_3.2-1_i386.deb (--unpack):
         trying to overwrite '/usr/lib/gimp/2.0/plug-ins/file-xmc', which is also in package gimp 2.7.3-2010110501~mm
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/gimp-plugin-registry_3.2-1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • How to profile LINQ to Entities queries in your asp.net applications - part 3

    - by nikolaosk
    In this post I will continue exploring ways to profile database activity when using the Entity Framework as the data access layer in our applications. If you want to read the first post of the series, click here. If you want to read the second post of the series, click here. In this post I will use the excellent Entity Framework Profiler - the best tool for EF profiling. You can download the trial (fully functional) edition of this tool from here. I will use the previous example...(read more)

    Read the article

  • Video Did Not Kill the Podcast Star

    - by Justin Kestelyn
    Who says video killed the podcast star? We're seeing more favorites out there than ever before. For example, the OTN team is proud to be supporters of the Java Spotlight Podcasts, straight from the official Java Evangelist Team at Oracle (lots of great insider info); the OurSQL: The MySQL Database Podcasts, produced by MySQL maven (and Oracle ACE Director) Sheeri Cabral; and The GlassFish Podcast, always a reliable source. And we'd add The Java Posse and The Basement Coders to our personal playlist. And although we're on a video kick ourselves at the moment, you can still get the audio of our TechCast Live shows, if you think we have "faces for radio."

    Read the article

  • Managing Custom Series

    - by user702295
    Custom series that are added should be created with a client-defined prefix, e.g. ACME Final Forecast, so they can be identified as non-standard series. That said, it is not always done, so beginning in v7.3.0 there is a new column called Application_Id in the Computed_Fields table. This is the table that stores the series information. Standard series will have a prefix similar to COMPUTED_FIELD, while a custom series will have an Application_Id value similar to 9041128B99FC454DB8E8A289E5E8F0C5. So a SQL query that returns the list of custom series in your database might look something like this:

        select computed_title Series_Name, application_id
          from computed_fields
         where application_id not like '%COMPUTED_FIELD%'
         order by 1;

    Read the article

  • How to keep a generic process unique?

    - by Steve Van Opstal
    I'm currently working on a project that makes connections between different banks, which send us information to which the project replies. One part of that project configures the different protocols that are used (not every bank uses the same protocol); this runs on a separate server. These processes all have unique IDs, which are stored in a database. But to save time and money on configuration and new processes, we want to make a generic protocol that banks can use. Because of PCI requirements we have to run a separate process for every bank we connect to, but the generic process has only one unique identifier, and therefore we cannot keep the copies apart. Giving every copy of the process a different identifier is, as I see it, impossible, because they run entirely separately. So how do I keep my generic process unique?
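
    One way out (a sketch; the config path and names are hypothetical) is to make the identifier a composite of the generic protocol ID and a per-deployment bank code read from each copy's own configuration, so the code stays generic while every running copy registers itself uniquely:

        <?php
        // Each deployed copy ships with its own tiny config file.
        $config = parse_ini_file('/etc/protocol/instance.ini'); // e.g. bank_code=ACME
        // Identity = (generic protocol ID, bank code).
        $instanceId = 'GENERIC_PROTOCOL_V1:' . $config['bank_code'];
        // Store and look up the process under $instanceId in the database.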

    Read the article

  • jQuery Templates, Data Link

    - by Renso
    jQuery Templates, Data Linking, and Globalization. I am sure you have read Scott Guthrie's blog post about officially supporting jQuery's templating, data linking and globalization; if not, here it is: jQuery Templating. Since we are an open source shop and use jQuery and jQuery plugins extensively, to say the least, I decided to look into the templating a bit and see what data linking is all about. For those not familiar with the terms, there is plenty of material out there on what they are, but here is what they mean in my experience:

    jQuery Templating: a templating engine that lets you specify a client-side template in which you indicate which properties/tags you want dynamically updated. You specify, in a sense, which parts of the html are dynamic, and since the engine is pluggable you can use tools such as jQuery data linking to sync your template up with data. What makes it more powerful is that you can easily work with rows of data, adding and removing rows. Once the template has been generated, which you do dynamically on a client-side event, you append/inject the resulting markup somewhere in your DOM. For example, you get a JSON object from the database, map it to your template, the engine populates the template with your data in the indicated places, and you then append the result to, say, a row in a table. I have not found it that useful for a single record of data, since you could just as easily get a partial view from the server via an html-type ajax call. It really shines when you dynamically add and remove rows from a list in the DOM. I have not found an alternative that matches the functionality of the jQuery template engine, and it helps, of course, that Microsoft officially supports it. In future versions it may even ship as part of the standard jQuery library and with future versions of Visual Studio.

    jQuery Data Linking: in short, I was initially fascinated by how, with one line of code, I could sync my JSON object up with my form elements. That is where my enthusiasm stopped. It is one line to sync your form with your JSON object, but it is not bidirectional as they state, and I tried all the workarounds they suggested; none of them work. The problem is that when you update your JSON object, it DOES NOT sync up with your form. In an example, accounts are edited client-side by selecting an account from a list by clicking on its row; this fetches the entire account JSON object via an ajax json-type call and then refreshes the form with the account's details from the new JSON object. What is the use of syncing my JSON with the form if I still have to programmatically sync my new JSON object with each DOM property?! So you may ask: what is the alternative?

    Good question, and the same one I was pondering: maybe I can just use it to keep my form in sync with my JSON object, so I can post that JSON object back to the server and update my database. That is when I discovered Knockout. Knockout addresses the issues mentioned above and also supports event handling through the observer pattern. Not wanting to go into detail here - Steve Sanderson, the creator of Knockout, has already done a terrific job of that. Thanks, Steve, for a great plug-in! Best of all, it integrates perfectly with the jQuery templating engine as well. I have not found an alternative plugin that matches its depth and width of functionality, and I would recommend it to anyone. The only drawback is the embedded html attributes (data-bind="") that you have to add to your HTML, in my opinion tying your behavior to your HTML, whereas I like to separate behavior from HTML as well as CSS, so the HTML purely defines content, not styling or behavior. But there are plusses to this as well, and also a nifty workaround, which I will mention shortly with an example. Instead of data-binding an html tag with Knockout event handling like so:

        <%=Html.TextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    do:

        <%=Html.DataBoundTextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    The html extension then takes care of the internals, and you could swap Knockout for something else inside the extension if you wanted to, keeping the HTML plugin-agnostic. Here is what the extension looks like; you can easily build a whole library supporting all kinds of data-binding options from this:

        public static class HtmlExtensions
        {
            public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
            {
                var dic = new RouteValueDictionary(htmlAttributes);
                dic.Add("data-bind", String.Format("value: {0}", name));
                return helper.TextBox(name, value, dic);
            }
        }

    Hope this helps in making a decision about when and where to consider jQuery templating, data linking, and Knockout.

    Read the article
