Search Results

Search found 54131 results on 2166 pages for 'database project'.

Page 137/2166 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • EmblaCom Oy Maximizes Database Availability and Reduces Costs with MySQL Cluster

    - by Bertrand Matthelié
    Headquartered in Finland, EmblaCom Oy provides turnkey and cloud-hosted voice solutions to mobile operators around the globe. Since launching the original mobile private branch exchange (PBX) in 1998, the company has focused on helping its partners provide efficient voice communications to their key business customers. The company’s voice solutions are used by millions of subscribers worldwide. EmblaCom Oy needed to replace several database engines with a standardized, scalable, development-friendly database solution to maximize availability and cut costs. The company chose MySQL Cluster Carrier Grade Edition, which has maximized accessibility to EmblaCom’s services for its clients and their hundreds of thousands of subscribers. The initiative has also halved the cost of installing the database solution for customers, and has lowered maintenance and customer service costs. Read the entire case study here.

    Read the article

  • Project help needed: great confusion about some basic concepts because of lack of proper material

    - by user287745
    Task: an attendance recorder and management system in a distributed environment. An example implementation is needed: a main server in each lab where the operator punches in the students' attendance.

    Scenario: a college with 10 departments. Every department has a computer lab with 60-100 computers. The computers within each lab are interconnected, and all computers in any department have to dial a particular number (the number given by the college internet department) to get connected to the internet, so it is safe to assume there is a central location to which all the computers in the college are connected. There is a 'students attendance portal' that can be accessed with Internet Explorer; students enter their ID and get their attendance record for the labs only. The working is roughly:

    1) The user selects which department and which year has arrived at the lab.
    2) The selection returns the names and roll numbers of all students belonging to that department, each with a checkbox to "tick if the student is present".
    3) A submit button reads the IDs of the checked boxes to determine each student's "particular count number"; from that a student ID is constructed and inserted with a 'present' mark. (There are also date, time and other columns to normalise the database, avoid conflicts and keep historic records, but you will have to assume that part.)

    Steps taken to date (please note we are not computer students; we had to pick a project from another line of study. As you will read in my many posts, I have only designed small websites out of personal interest and have never done anything official like this): the database is fully normalised, and the website that performs the required functions on the database is built. Testing: the database and site were deployed on a free AspSpider server and worked when tested from several computers.

    Now the problem, please help, thank you. A practical demonstration has to be done within the college network, with no internet. We have been assigned a lab of 60 computers to demonstrate in. (Please don't reply that "60 computers is not a big deal, one CPU can manage it" - I know that; it is a hypothetical situation where we assume 60 is not really 60 but more like 60,000 computers.)

    1a) Make a web server: yes, install IIS, put the files in the www folder and configure the server to run .aspx files - although a link to a step-by-step guide would be appreciated. Which version of Windows should I ask for, XP or some Windows Server 2000-something edition?
    2a) Make a database server: well yes, install SQL Server 2005, but then what? Just put the database file on a PC, share it, and point the connection string at the share?
    3a) Make the site accessible from the remaining computers: http://localhost/sitename? All users, being operators of the particular lab, have the right to edit, write or delete (in dispute), so any "user" who hates our program could make the database inconsistent by accessing the same record, making conflicting edits and then complaining. How do we prevent this? You know, something like: while a table row is being written, others can only read, not write.

    One big confusion: the task says "in a distributed environment" - how do we implement this, and where does the "distributed environment" come in? The labs are in different departments, but the database server will be one and the web server will be one, so what is distributed?

    Read the article

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We’re running into problems somewhere in the SDLC that don’t rear their ugly head until time and budget are starting to run out. We try to be “Agile” (software specifications are not as thorough as they could be, and clients have direct access to the developers any time they want), and we are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:

    - Wireframe mockups for every form view
    - A data dictionary of all field inputs
    - Descriptions of any business rules that affect the system
    - Descriptions of the outputs

    I’m new to software management, but I’ve overseen enough software projects now to know that as soon as users start observing demos of the system, they make a huge number of requests like “Can we add a few more fields to this report… can we redesign the look of this interface… can we send an email at this part of the workflow… can we take this button off this view… can we make this function redirect to a different screen… can we change some text on this screen… can we create a special account where someone can log in and get access to X… this report takes too long to run, can it be optimised… can we remove this step in the workflow… there’s got to be a better image we can put here…” and so on. Some changes are tiny and can be implemented reasonably quickly, but there could be up to 50-100 or so such requests during the course of the SDLC. Other change requests are things clients claim they “just assumed would be part of the system”, even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need a better way both to internally identify whether work being requested is “out of spec”, and to communicate this to a client in such a manner that they can understand why what they are asking for is “extra” work. We need a way to track this work and be transparent about it. In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients so that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this, the nightly backup was already old, so we were not able to simply restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails, the SQL Server backup task fails, Copy Database fails, and the Export Data wizard fails. It seems all the data can still be read from the tables (e.g. using bcp). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean the changes aren't being written to the MDF?)

    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    1. Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    2. Export the data from the database using bcp, create the database on another SQL Server (using the Generate Scripts function on the corrupt db to create the schema on the new db), and use bcp again to import the data?
    3. Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
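    For what it's worth, here is a rough sketch of the nightly bcp export mentioned above, written in Python purely for illustration. It assumes a Windows machine with the SQL Server client tools (bcp.exe) installed, a trusted connection, and hypothetical server, database and folder names; adjust table discovery and paths to suit.

        import datetime
        import os
        import subprocess

        import pyodbc  # assumed available; any client that can list the tables would do

        SERVER = "PRODSQL01"        # hypothetical server name
        DATABASE = "ProductionDb"   # hypothetical database name
        EXPORT_DIR = r"D:\exports"  # hypothetical destination folder

        def list_tables(conn):
            """Return (schema, table) pairs for every user table in the database."""
            cursor = conn.cursor()
            cursor.execute(
                "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                "WHERE TABLE_TYPE = 'BASE TABLE'"
            )
            return [(row[0], row[1]) for row in cursor.fetchall()]

        def export_table(schema, table, stamp):
            """Shell out to bcp to dump one table in native format (-n) over a trusted connection (-T)."""
            out_file = os.path.join(EXPORT_DIR, stamp, f"{schema}.{table}.bcp")
            os.makedirs(os.path.dirname(out_file), exist_ok=True)
            cmd = [
                "bcp", f"{DATABASE}.{schema}.{table}", "out", out_file,
                "-S", SERVER, "-T", "-n",
            ]
            subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            stamp = datetime.date.today().isoformat()
            conn = pyodbc.connect(
                f"DRIVER={{SQL Server}};SERVER={SERVER};DATABASE={DATABASE};Trusted_Connection=yes"
            )
            for schema, table in list_tables(conn):
                export_table(schema, table, stamp)

    This only protects the data, not the schema; pairing it with a scripted schema export (as in option 2 above) gives a full nightly failsafe.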

    Read the article

  • Does this inheritance design belong in the database?

    - by Berryl
    === CLARIFICATION === The 'answers' older than March are not answers to the question in this post!

    Hello. In my domain I need to track allocations of time spent on Activities by resources. There are two general types of Activities of interest - ones based on a Project and ones based on an Account. The notions of Project and Account have other features totally unrelated both to each other and to capturing allocations of time, and each is modeled as a table in the database. For a given Allocation of time, however, it makes sense not to care whether the allocation was made to a Project or to an Account, so an ActivityBase class abstracts away the difference. An ActivityBase is either a ProjectActivity or an AccountingActivity (the object model is below).

    Back in the database, though, there is no direct value in having tables for ProjectActivity and AccountingActivity. BUT the Allocation table needs to store something in the column for its ActivityBase. Should that something be the id of the Project / Account, or a reference to tables for ProjectActivity / AccountingActivity? How would the mapping look?

    === Current Db Mapping (Fluent) ===

    Below is how the mapping currently looks:

        public class ActivityBaseMap : IAutoMappingOverride<ActivityBase>
        {
            public void Override(AutoMapping<ActivityBase> mapping)
            {
                //mapping.IgnoreProperty(x => x.BusinessId);
                //mapping.IgnoreProperty(x => x.Description);
                //mapping.IgnoreProperty(x => x.TotalTime);
                mapping.IgnoreProperty(x => x.UniqueId);
            }
        }

        public class AccountingActivityMap : SubclassMap<AccountingActivity>
        {
            public void Override(AutoMapping<AccountingActivity> mapping)
            {
                mapping.References(x => x.Account);
            }
        }

        public class ProjectActivityMap : SubclassMap<ProjectActivity>
        {
            public void Override(AutoMapping<ProjectActivity> mapping)
            {
                mapping.References(x => x.Project);
            }
        }

    There are two odd smells here. Firstly, the inheritance chain adds nothing in the way of properties - it simply adapts Projects and Accounts into a common interface so that either can be used in an Allocation. Secondly, the properties in the ActivityBase interface are redundant to keep in the db, since that information is available in Projects and Accounts.

    Cheers, Berryl

    === Domain ===

        public class Allocation : Entity
        {
            ...
            public virtual ActivityBase Activity { get; private set; }
            ...
        }

        public abstract class ActivityBase : Entity
        {
            public virtual string BusinessId { get; protected set; }
            public virtual string Description { get; protected set; }
            public virtual ICollection<Allocation> Allocations { get { return _allocations.Values; } }
            public virtual TimeQuantity TotalTime
            {
                get { return TimeQuantity.Hours(Allocations.Sum(x => x.TimeSpent.Amount)); }
            }
        }

        public class ProjectActivity : ActivityBase
        {
            public virtual Project Project { get; private set; }

            public ProjectActivity(Project project)
            {
                BusinessId = project.Code.ToString();
                Description = project.Description;
                Project = project;
            }
        }

    Read the article

  • Database cluster... without Master/Slave?

    - by RadiantHex
    Hi there! I'm wondering if it is possible to have a set of SQL database servers to which data is written and have them replicate while avoiding conflicting information. I imagine that a Master/Slave structure would be mandatory, but I would like to know whether a system where the servers have no hierarchy could support replication. Currently I'm using MySQL, but I would be happy to move to another database if needed. Any ideas? :)

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following results demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:

    - On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%, and Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.
    - Using the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system, 10x more performance per processor than the SPARC T2+ 1.4 GHz server, and 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.
    - In replication testing, the two-socket SPARC T4-2 server is over 3x faster than a four-socket SPARC Enterprise T5440 server in both an asynchronous replication environment and highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

    Performance Landscape

    Mobile Call Processing Test Performance (System / Processor / Sockets/Cores/Threads / Tps):
    - SPARC T4-2 / SPARC T4, 2.85 GHz / 2/16/128 / 218,400
    - M4000 / SPARC64 VII+, 2.66 GHz / 4/16/32 / 162,900
    - SPARC T3-4 / SPARC T3, 1.65 GHz / 4/64/512 / 80,400

    TimesTen Performance Throughput Benchmark (TPTBM), Read-Only (System / Processor / Sockets/Cores/Threads / Tps):
    - SPARC T3-4 / SPARC T3, 1.65 GHz / 4/64/512 / 7.9M
    - SPARC T4-2 / SPARC T4, 2.85 GHz / 2/16/128 / 6.5M
    - M4000 / SPARC64 VII+, 2.66 GHz / 4/16/32 / 3.1M
    - T5440 / SPARC T2+, 1.4 GHz / 4/32/256 / 3.1M

    TimesTen Performance Throughput Benchmark (TPTBM), Update-Only (System / Processor / Sockets/Cores/Threads / Tps):
    - SPARC T4-2 / SPARC T4, 2.85 GHz / 2/16/128 / 547,800
    - M4000 / SPARC64 VII+, 2.66 GHz / 4/16/32 / 363,800
    - SPARC T3-4 / SPARC T3, 1.65 GHz / 4/64/512 / 240,500

    TimesTen Replication Tests (System / Processor / Sockets/Cores/Threads / Asynchronous / 2-Safe):
    - SPARC T4-2 / SPARC T4, 2.85 GHz / 2/16/128 / 38,024 / 13,701
    - SPARC T5440 / SPARC T2+, 1.4 GHz / 4/32/256 / 11,621 / 4,615

    Configuration Summary

    Hardware Configurations:
    - SPARC T4-2 server: 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 1 x 6 Gb/s SAS HBA; 4 x 300 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.
    - SPARC T3-4 server: 4 x SPARC T3 processors, 1.6 GHz; 512 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 8 x 146 GB internal disks; 1 x Sun Fire X4275 server configured as COMSTAR head.
    - SPARC Enterprise M4000 server: 4 x SPARC64 VII+ processors, 2.66 GHz; 128 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 1 x 6 Gb/s SAS HBA; 2 x 146 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.

    Software Configuration: Oracle Solaris 11 11/11; Oracle TimesTen 11.2.2.4.

    Benchmark Descriptions

    The TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak is reached via saturation of the available resources.

    The Parallel Replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high availability, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured by the number of parallel replication servers and the maximum transactions per second across all concurrent processes.

    See Also: SPARC T4-2 Server (oracle.com, OTN); Oracle TimesTen In-Memory Database (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • How do I rename the Tfs_xxxConfiguration database for TFS 2010?

    - by Jaxidian
    When we installed TFS 2010, we made some poor decisions in naming our databases. I have figured out how to rename everything except for the Tfs_xxxConfiguration database. How can I get this renamed? I can only find a read-only display of the connection string to it, and I cannot find where to alter that connection string either directly or indirectly. I'd really prefer not to have to rebuild it from scratch... again... Thanks!

    Read the article

  • How many tables can an MS SQL database hold?

    - by Peter Turner
    I've run into this cryptic statement for SQL Server: "Files Per Database: 32,767". What does that mean exactly? Is there a maximum number of tables for a given version of SQL Server? We try to support SQL Server 2005 and later, 32-bit and 64-bit. So if anyone has a handy-dandy table they use to figure out how many tables they can have per DB for Microsoft SQL Server, I'd heartily appreciate seeing it.
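    As a side note, one practical way to see how close a database actually is to any table ceiling is simply to count its user tables. A minimal Python sketch, assuming pyodbc, a trusted connection and hypothetical server/database names:

        import pyodbc  # assumes the SQL Server ODBC driver is installed

        # Hypothetical server and database names; replace with your own.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDb;Trusted_Connection=yes"
        )
        cursor = conn.cursor()

        # sys.tables lists user tables (SQL Server 2005 and later).
        cursor.execute("SELECT COUNT(*) FROM sys.tables")
        user_tables = cursor.fetchone()[0]

        # Tables count against the per-database limit on total objects,
        # not against the 'Files Per Database' figure quoted above.
        print(f"{user_tables} user tables in this database")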

    Read the article

  • How to compile and build faster in Visual Studio 2010

    - by anirudha
    Sometimes a solution includes many things besides the main project. For example, an ASP.NET MVC project may come with a test project or other projects attached to the one you are working on. Debugging then takes a long time, because Visual Studio compiles all of the subprojects and attached projects, and that full compilation can be slow. The solution is to build and debug only the current project instead of all of them, which saves compilation time in Visual Studio. To configure this, open Configuration Manager (from the solution configuration dropdown or the Build menu), keep the Build checkbox ticked for the project you are currently working on, and untick it for all the others. After that, debugging in Visual Studio starts faster.

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
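    A rough sketch of that nightly job in Python, purely to illustrate the compare-and-sync idea described above; the table, columns, key and driver are hypothetical, and the row-conversion step is left as a stub because it depends on the real schemas:

        import sqlite3  # stand-in driver; use the appropriate client for your databases

        def convert_row(old_row):
            """Map an old-schema row to the new schema (stub - depends on the real conversion rules)."""
            return {"id": old_row["id"], "name": old_row["name"]}

        def sync(old_conn, new_conn):
            old_conn.row_factory = sqlite3.Row
            new_conn.row_factory = sqlite3.Row

            # 1. Pull everything from the old database and convert it in memory.
            old_rows = [convert_row(r) for r in old_conn.execute("SELECT id, name FROM customers")]
            old_ids = {r["id"] for r in old_rows}

            # 2. Insert any converted record that the new database does not have yet.
            new_ids = {r["id"] for r in new_conn.execute("SELECT id FROM customers")}
            for row in old_rows:
                if row["id"] not in new_ids:
                    new_conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)",
                                     (row["id"], row["name"]))

            # 3. Delete rows that exist in the new database but no longer exist in the old one.
            for stale_id in new_ids - old_ids:
                new_conn.execute("DELETE FROM customers WHERE id = ?", (stale_id,))

            new_conn.commit()

    An alternative that avoids scanning everything each night is to compare only rows changed since the last run using a last-modified column, but that depends on the old schema tracking one.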

    Read the article

  • How do I connect to a MySQL database on a local server in NetBeans 7.0.1 (Windows)?

    - by diEcho
    Hello all, I am using NetBeans IDE 7.0.1 on Windows 7 for the very first time for my PHP work. In my company there is a local server (192.168.1.99) where all projects reside, and we access that server's phpMyAdmin. I have added my project folders to NetBeans (which was also very hectic), but now I am having a problem connecting to the database on the local server, even though I can see 192.168.1.99/phpmyadmin through my browser. I have set the values below:

    Server Host: localhost, Server Port Number: 3306, Administrator Username: keshav, Administrator Password: ******

    When I click Connect, a popup error window appears with the text below:

    Unable to connect to the MySQL server: org.netbeans.api.db.explorer.DatabaseException: org.netbeans.api.db.explorer.DatabaseException: java.sql.SQLException: Access denied for user 'keshav'@'localhost' (using password: YES). The server may not be running or your MySQL connection properties may not be set correctly. Do you want to edit your MySQL connection properties?

    Please help me out. Thanks

    Read the article

  • Which open-source Scrum project management tool do you use?

    - by jumar
    I'm looking for an open-source Scrum project management tool for a small dev team (3 to 6 developers). I've been impressed by Trac, but I don't need its bug-tracking feature as we already use Mantis. I'm having a look at iceScrum, which seems feature-full and shiny but a bit cluttered. A solution that integrates into Eclipse would be a plus.

    Read the article

  • Django: Setting up database code tables (aka reference tables, domain tables)?

    - by User
    Applications often need some database code tables (aka reference tables, domain tables or lookup tables). Suppose I have a model class called Status with a field called name that could hold values like: Canceled, Pending, InProgress, Complete. Where and at what point would I set up these values in Django? It's essentially a one-time operation to set up these values in the database, although values could occasionally be added later.
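    One common approach (a minimal sketch only, assuming Django 1.7+ migrations and a hypothetical app called 'myapp') is to seed the lookup rows from a data migration, so every environment gets them when migrations run:

        # myapp/migrations/0002_seed_status.py - hypothetical migration file name
        from django.db import migrations

        STATUS_NAMES = ["Canceled", "Pending", "InProgress", "Complete"]

        def seed_statuses(apps, schema_editor):
            # Use the historical model so the migration keeps working as the model evolves.
            Status = apps.get_model("myapp", "Status")
            for name in STATUS_NAMES:
                Status.objects.get_or_create(name=name)

        def unseed_statuses(apps, schema_editor):
            Status = apps.get_model("myapp", "Status")
            Status.objects.filter(name__in=STATUS_NAMES).delete()

        class Migration(migrations.Migration):

            dependencies = [
                ("myapp", "0001_initial"),
            ]

            operations = [
                migrations.RunPython(seed_statuses, unseed_statuses),
            ]

    On older Django versions, a fixtures file loaded with loaddata serves the same one-time-setup purpose.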

    Read the article

  • Free or open source self hosted project management tools?

    - by titel
    Hi guys, does anyone know a free, self-hosted alternative to Basecamp or Active Collab? Basic requirements would be projects, milestones, tasks, users, reminders and reports. I've spent some time using these tools, but now we need a tool for a project we're running and we're looking for a free alternative. Thanks in advance for your time, Constantin Tovisi

    Read the article

  • Project Euler #18 - how to brute-force all possible paths in a tree-like structure using Python?

    - by euler user
    I am trying to learn Python the Atlantic way and am stuck on Project Euler #18. All of the stuff I can find on the web (and there was a LOT more googling beyond that) is some variation on "well, you COULD brute force it, but here's a more elegant solution"... I get it, I totally do. There are really neat solutions out there, and I look forward to the day when the phrase "acyclic graph" conjures up something more than a hazy, 1-megapixel-resolution image in my head. But I need to walk before I run here, see the state, and toy around with the brute-force answer.

    So, the question: how do I generate (enumerate?) all valid paths for the triangle in Project Euler #18 and store them in an appropriate Python data structure? (A list of lists is my initial inclination.) I don't want the answer - I want to know how to brute force all the paths and store them in a data structure.

    Here's what I've got. I'm definitely looping over the data set wrong. The desired behavior would be to go "depth first(?)" rather than just looping over each row ineffectually. I read ch. 3 of Norvig's book but couldn't translate the pseudo-code. I tried reading over the AIMA Python library for ch. 3, but it makes too many leaps.

        triangle = [
            [75],
            [95, 64],
            [17, 47, 82],
            [18, 35, 87, 10],
            [20, 4, 82, 47, 65],
            [19, 1, 23, 75, 3, 34],
            [88, 2, 77, 73, 7, 63, 67],
            [99, 65, 4, 28, 6, 16, 70, 92],
            [41, 41, 26, 56, 83, 40, 80, 70, 33],
            [41, 48, 72, 33, 47, 32, 37, 16, 94, 29],
            [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],
            [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],
            [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],
            [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],
            [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23],
        ]

        def expand_node(r, c):
            return [[r+1, c+0], [r+1, c+1]]

        all_paths = []
        my_path = []

        for i in xrange(0, len(triangle)):
            for j in xrange(0, len(triangle[i])):
                print 'row ', i, ' and col ', j, ' value is ', triangle[i][j]
                ??my_path = somehow chain these together???
                if my_path not in all_paths:
                    all_paths.append(my_path)

    Answers that avoid external libraries (like itertools) are preferred.
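    For reference, here is one minimal sketch (an illustration, not the poster's code) of enumerating every root-to-bottom path recursively and keeping the results in a list of lists; with 15 rows that is only 2^14 = 16,384 paths, so brute force is perfectly workable:

        def all_paths(tri, row=0, col=0):
            """Return every path (as a list of values) from tri[row][col] down to the bottom row."""
            value = tri[row][col]
            if row == len(tri) - 1:          # bottom row: the path is just this value
                return [[value]]
            paths = []
            # A node at (row, col) can only move to (row+1, col) or (row+1, col+1).
            for child_col in (col, col + 1):
                for tail in all_paths(tri, row + 1, child_col):
                    paths.append([value] + tail)
            return paths

        # Usage with the 'triangle' list of lists from the question:
        # paths = all_paths(triangle)        # 16,384 lists, each 15 numbers long
        # best = max(sum(p) for p in paths)  # the maximum path total

    It uses no external libraries, matching the stated preference.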

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >