Search Results

Search found 70288 results on 2812 pages for 'custom data generator'.

Page 1612/2812

  • How to assign correct permissions to both webserver and svn users?

    - by Patrick
    I have an issue with file ownership. I have a Drupal website, and the "files" folder needs to be owned by "www-data" in order to let users upload files with PHP. However, I'm now using svn, and I need all folders and files to be owned by "svnuser" for it to work. So I guess I now need to add both users to a group with the proper permissions. I'm not sure exactly what to do; could you tell me the exact steps needed? Thanks


  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification?

    Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose?

    My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more, individual "working units" in my database. For an instance with 4 GB of buffer cache, an 8k block size will provide 524,288 blocks of cache.

    The following SQL*Plus script returns the average, median, min, and max rows per block:

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
            select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
                 , count(*) cnt
            from &tab
            group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                   , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get this result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.


  • admin-over-clients application

    - by azzido
    I have the same web application running on several different servers. Now I want a central place to administer everything in one web interface. What is the best way to do this? Should I provide a REST interface on every web application and let the admin application make all the calls? This seems like a common problem that has already been solved by smarter people than me. UPDATE: I want to be able to change the application data on each web application and see the results for each web application.


  • Easily Setup Fluent Nhibernate With Oracle

    Nowadays it is preferred to use an ORM instead of older data access approaches. However, setting up an ORM like Fluent NHibernate with Oracle takes some time. With the help of NuGet you can set up such third-party tools in no time. In this article I am going to show how you can easily configure Fluent NHibernate with Oracle using NuGet. Moreover, the article will guide you in building a generic repository using Fluent NHibernate.


  • Fun with Outer Joins

    Learn how an outer join works and how you can use it in your applications to find the results you need when matching data isn't in all your tables. Keep your database and application development in sync: SQL Connect is a Visual Studio add-in that brings your databases into your solution. It then makes it easy to keep your database in sync and commit to your existing source control system. Find out more.


  • Best way to set up servers for .NET performance [migrated]

    - by msigman
    Assume we have 3 physical servers, and let's say we are only interested in performance, not reliability. Is it better to give each server a specific function, or to make them all duplicates and split the traffic between them? In other words, should we dedicate one as the DB server, one as the web server, and one as the reporting server/data warehouse, or is it better to put all three services on each server and use them as a web farm?


  • Is a 1:* write:read thread system safe?

    - by Di-0xide
    Theoretically, thread-safe code should prevent race conditions. Race conditions, as I understand them, occur because two threads attempt to write to the same location at the same time. However, what about a threading model in which a single thread is designed to write to a location, and several slave/worker threads simply read from that location? Assuming the value or timing at which they read the data isn't relevant and doesn't hinder the worker threads' outcome, wouldn't this be considered "thread safe", or am I missing something in my logic?
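
    A minimal C11 sketch of the single-writer / many-reader pattern the question describes (illustrative only; the variable names and iteration counts are made up, not taken from the original post). The point it demonstrates: even with a single writer, a concurrent non-atomic read and write of the same location is still a data race under the C and C++ memory models, so the shared location needs atomic access (or some other synchronization) to be well defined.

        /*
         * Single writer, several readers. Plain (non-atomic) accesses here
         * would be a data race even though only one thread writes; making
         * the shared variable atomic gives the pattern defined behavior.
         * Build with: gcc -std=c11 -pthread example.c
         */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static _Atomic long shared_value = 0;   /* written by exactly one thread */
        static atomic_bool  done         = false;

        static void *writer(void *arg)
        {
            (void)arg;
            for (long i = 1; i <= 1000000; i++)
                atomic_store_explicit(&shared_value, i, memory_order_release);
            atomic_store(&done, true);
            return NULL;
        }

        static void *reader(void *arg)
        {
            long last = 0;
            while (!atomic_load(&done)) {
                /* Readers only load, so they cannot race with each other;
                 * the acquire load pairs with the writer's release store. */
                long v = atomic_load_explicit(&shared_value, memory_order_acquire);
                if (v > last)
                    last = v;
            }
            printf("reader %d last saw value %ld\n", *(int *)arg, last);
            return NULL;
        }

        int main(void)
        {
            pthread_t w, r[3];
            int ids[3] = {0, 1, 2};

            for (int i = 0; i < 3; i++)
                pthread_create(&r[i], NULL, reader, &ids[i]);
            pthread_create(&w, NULL, writer, NULL);

            pthread_join(w, NULL);
            for (int i = 0; i < 3; i++)
                pthread_join(r[i], NULL);
            return 0;
        }

    If the readers genuinely do not care which value they observe, relaxed atomic loads and stores would also be enough; the acquire/release pairing above is only needed when a reader must also see data written before the store.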


  • Video Search Engine Optimization - Advanced Techniques For Best SEO Services

    Everyone is aware that a site configured well for search engines is the one that attracts the most internet traffic, and people usually access such sites. Yet far fewer people have thought about optimizing videos in the same way they optimize their sites. As video content becomes more and more popular, it is essential that video search engine optimization be done accurately.


  • apt-get -f install removed software center and several other files

    - by user287858
    I ran sudo apt-get -f install, and several files and programs were removed, including the Software Center. Is there a way to re-download everything as if Ubuntu were new again, without a CD? This computer does not have a CD-ROM drive. I'd be fine with losing all the data on this computer. Also, when I run sudo apt-get install (almost anything), I get errors about dependencies and files not being available. Thanks to anyone who can help.


  • Running MS Access Programs

    - by fredhappier
    I have an old program developed in MS Access and would like to convert it to Kexi somehow. The program on Windows is launched with Access. Is there any way that Kexi can launch this program? I know my way around Ubuntu and the terminal, but I am not well versed in databases. Once you make something in Kexi, how do you "run" it or "view" what you've made? So far I am able to import the MDB file into Kexi and see all of the database data, but that is as far as I have gone. The program was made by a relative years ago for my dad. I myself have been an Ubuntu-only user for 6+ years now, have no intention of touching Windows, and am looking for a Linux solution. My dad is also an Ubuntu user, hence why I'm looking for a solution. If Kexi cannot launch and run an MDB file, what else can I try? Anything browser based? Any tips or direction would be extremely helpful.

    I spoke to my brother, who originally made the program. I told him about Kexi, and here is what he said. Does any of this make sense? Thanks.

    This is how I would try to get them to work:

    Stand-alone setup - After import, look for an option where you designate which form object you want to open upon startup. It might be in the tools tab in the picture below. After you save that change, restart it and it should work.

    Front end/back end setup - Do what I suggested for the stand-alone setup to the "front-end" MDB file. After you do that, put the other file (the table MDB file) where you want it to reside on the network. Now, open the "front end" file back up and look for an option that will allow you to "connect" to those tables in the other file. It looks like it could be in the "External data" tab in the picture below. For this setup, you may need to do these two tasks in the reverse order I just mentioned.

    Thanks! Fred


  • What is the right way to group this project into classes?

    - by sigil
    I originally asked this on SO, where it was closed and recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process:

    1. Get the user's FTP credentials (username & password).
    2. Check to make sure the credentials establish a valid connection to the FTP server.
    3. Query several Sharepoint lists and join the results of those queries to create a list of items that need to have action taken on them. Each item in the list has a folder.
    4. For each item: zip the contents of the folder, upload the folder to the FTP server using SFTP, and update the item's Sharepoint data.
    5. Email the user an Excel report showing, e.g., items without folder paths and items that failed to zip or upload.

    Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates.

    I've defined the following set of classes, each of which is in its own .cs file:

    - SFTP: file transfer processes.
    - DataHandler: Sharepoint data retrieval/querying/updating processes. Also makes and uploads the zip files.
    - Exceptions: not just one class; this is the .cs file where I have all of my exception classes.
    - Report: builds and sends the report.
    - Program: the main class for running the program.

    I recognize that the DataHandler class is a god object, but I don't have a good idea of how to refactor it. I feel like it should be more fine-grained than just breaking it into Sharepoint, Zip, and Upload, but maybe that's it. Also, I haven't yet worked out how to combine the periodic behavior with the "wait for user input at any point in the process" part; I think that involves threads, which means other classes to manage the threads... I'm not that well-versed in design patterns, but is there one that fits this project well? If this is too big of a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here.


  • Open space office for team work? [closed]

    - by pboy
    An argument I often hear to justify an open space office layout is that, being open, it contributes to teamwork and more collaboration between people. Does it really contribute to teamwork, compared to private offices? Is there hard data that might support this? Edit: I'm interested in this topic in a programmer's context, a bit like the study described in Peopleware, which focuses on software development.


  • Feature: Lead with Intelligence

    Business efficiency depends on business decisions, and business decisions depend on current, accurate information and powerful analysis. See how Oracle data warehousing, business intelligence, and enterprise performance management solutions deliver the information, analysis, and efficiencies to propel your business ahead of the competition.


  • Oracle Magazine, November/December 2005

    Oracle Magazine November/December 2005 features articles on the 2005 Editors' Choice Awards, the Enterprise Grid Alliance, Oracle AWM 10g, Oracle Developer Tools for .NET, Oracle HTML DB, Oracle Data Provider for .NET, Oracle JDeveloper, Oracle ADF, and much more.


  • Supporting Large Scale Team Development

    With large-scale development of a database application, the task of supporting a large number of development and test databases, and keeping them up to date with different builds, can soon become ridiculously complex and costly. Grant Fritchey demonstrates a novel solution that can reduce the storage requirements enormously and allow individual developers to work on their own version, using a full set of data.


  • How to pronounce "std" as in "std::vector"

    - by Lex Fridman
    In C++, the STL (standard template library) includes a namespace std that contains the many data structures and algorithms that we all know and love. I've always pronounced this namespace just like sexually transmitted diseases: S T D. But then I listened to this excellent series of lectures by Stephan T. Lavavej, and he pronounces it "stood". Which is the "correct" pronunciation, or at least the most commonly used one?


  • To Obtain EPOCH Time Value from a Packed BIT Structure in C [migrated]

    - by xde0037
    This is not a homework assignment! We have a binary data file with the following 12-byte record structure:

        Field-1 : Byte 1, Byte 2, + 6 bits from Byte 3
        Time-1  : 2 bits from Byte 3 + Byte 4
        Time-2  : Byte 5, Byte 6, Byte 7, Byte 8
        Field-2 : Byte 9, Byte 10, Byte 11, Byte 12

    I need to extract an epoch time value; the total time value is packed in 42 bits as described below. Field-1 and Field-2 are not an issue, as they can be taken out easily. I need the time value as epoch time (long); it is packed in bytes 5, 6, 7, 8 plus bytes 3 and 4 as follows: bytes 5 to 8 (a 32-bit word) pack time-value bits 0 through 31 (byte 5 has bits 0 to 7, byte 6 has 8 to 15, byte 7 has 16 to 23, byte 8 has 24 to 31). The remaining 10 bits of the time value are packed in bytes 3 and 4: byte 3 has 2 bits (32 and 33), and byte 4 has the remaining bits (34 to 41). So the time value is 42 bits in total, packed as above. I need to compute the epoch value from these 42 bits. How do I do it? I have done something like this, but I am not sure it gives me the correct value:

        typedef struct P_HEADER {
            unsigned int tmuNumber : 21;
            unsigned int time1     : 10;  // Bits 6,7 from Byte-3 + 8 bits from Byte-4
            unsigned int time2     : 32;  // 32 bits: Bytes 5,6,7,8
            unsigned int traceKey  : 32;
        } __attribute__((__packed__)) P_HEADER;

    Then in the code:

        P_HEADER *header1;
        // get the input string in hex, etc.
        // parse the input with the header:
        header1 = (P_HEADER *)inputBuf;
        // then print header1->time1, header1->time2 ...
        long ttime = header1->time1 | header1->time2;  // ?? is this the way to get the values out?

    Any hint or tip will be appreciated. Environment: gcc 4.1, Linux. Thanks in advance.
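
    A shift-and-mask sketch of the extraction (not from the original post). The byte order of bytes 5-8 follows the description above; treating bits 6-7 of byte 3 as time bits 32-33 is an assumption taken from the comment in the posted struct, and bit 0 is assumed to be the least significant bit of byte 5, so both should be checked against the file specification. It avoids bit fields, whose layout and ordering are implementation-defined, and it replaces the OR in the posted code, which combines time1 and time2 without shifting and so cannot reassemble the 42-bit value.

        /* Illustrative sketch only -- verify the bit-numbering assumptions
         * (noted above) against the actual file specification. */
        #include <stdint.h>
        #include <stdio.h>

        /* rec points at one 12-byte record; rec[0] is byte 1 of the structure. */
        static uint64_t unpack_time42(const unsigned char rec[12])
        {
            uint64_t t;

            /* Time bits 0..31 from bytes 5..8 (byte 5 = bits 0..7, ..., byte 8 = bits 24..31). */
            t  =  (uint64_t)rec[4]
               | ((uint64_t)rec[5] << 8)
               | ((uint64_t)rec[6] << 16)
               | ((uint64_t)rec[7] << 24);

            /* Time bits 32..33: assumed to be bits 6..7 of byte 3. */
            t |= (uint64_t)((rec[2] >> 6) & 0x3u) << 32;

            /* Time bits 34..41: byte 4. */
            t |= (uint64_t)rec[3] << 34;

            return t;   /* 42-bit value; interpret per the file spec (e.g., epoch seconds) */
        }

        int main(void)
        {
            unsigned char rec[12] = {0};   /* would normally be read from the data file */
            printf("time value: %llu\n", (unsigned long long)unpack_time42(rec));
            return 0;
        }

    If the two bits actually sit in a different position of byte 3, only the shift and mask on rec[2] need to change; the rest of the reassembly stays the same.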


  • Oracle Magazine, March/April 2008

    Oracle Magazine March/April 2008 features articles on IT modernization, Marvel Entertainment, SQL Performance Analyzer, Oracle SQL Developer, upgrade certification to Oracle Database 11g, Oracle Database 11g features, declarative data filters, Oracle Application Express, PL/SQL best practices, and much more.

