Search Results

Search found 20163 results on 807 pages for 'struct size'.

Page 497/807 | < Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >

  • Print full path of files and sizes with find in Linux

    - by cat pants
    Here are the specs: Find all files in / modified after the modification time of /tmp/test, exclude /proc and /sys from the search, and print the full path of the file along with human readable size. Here is what I have so far: find / \( -path /proc -o -path /sys \) -prune -o -newer /tmp/test -exec ls -lh {} \; | less The issue is that the full path doesn't get printed. Unfortunately, ls doesn't support printing the full path! And all solutions I have found that show how to print the full path suggest using find. :| Any ideas? Thanks!
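
    One possible approach (a sketch, not tested against this exact layout): let du print both the human-readable size and the path, since du always echoes back the path it was handed. The -type f filter is an addition to the original command, to keep directories out of the listing.

      find / \( -path /proc -o -path /sys \) -prune -o \
           -newer /tmp/test -type f -exec du -h {} + | less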

    Read the article

  • Announcing Fusion Middleware Innovation Awards

    - by Michelle Kimihira
    Author: Neela Chaudhari Every year at OpenWorld, Oracle announces the winners of its most prestigious awards in Middleware, the Fusion Middleware Innovation Awards. This year, we’ll be announcing the winners and highlighting a few of their original implementations during this key session in the Middleware stream: 11:45 AM on Tuesday, October 2nd, CON9162 Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects in Moscone West, 3001. In addition, we’ll give a sneak peek of a few winners during GEN9394: Fusion Middleware General Session with Hasan Rizvi at 10:15 AM on Tuesday, October 2nd in Moscone West, Hall D! What kinds of customers win the Fusion Middleware Innovation Awards? Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The winners are selected by a panel of judges who score each entry across multiple scoring categories. This year’s categories included: Oracle Exalogic, Cloud Application Foundation, Service Integration (SOA) and BPM, WebCenter, Identity Management, Data Integration, Application Development Framework and Fusion Development, and Business Analytics (BI, EPM and Exalytics). Last year at OpenWorld 2011 we had standing room only in our session, so come early! Over 30 innovative customers won the award, including companies like BT, Choice Hotels, Electronic Arts, Clorox Company, ING, Dunkin Brands, Telenor, Haier, AT&T, Manpower, Herbal Life and many others. Did you miss your chance this year to nominate your company? Come join us in the awards session to get an edge on next year’s submission, and watch this blog for the next opportunity in 2013. There are other awards as part of Oracle’s Excellence awards program, and you can subscribe to our regular Fusion Middleware newsletter to get the latest reminders.

    Read the article

  • More RAM in motherboard than CPU can process

    - by Videographer Francisco Ferrer
    As I understand from reading a forum post: "By making more memory available to the system more data can be cached in RAM, so there will be less hard drive activity, and less swapping to memory so your system will perform better." But what happens when a desktop motherboard supports (and has installed) more RAM than the processor can handle (aka the processor's Max Memory Size)? Is there any advantage to having a motherboard that holds a bigger RAM capacity than the CPU supports? If so, how does this translate when using a program like Adobe After Effects and its RAM preview? Thanks

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard By Nitesh Jain

    - by JuergenKress
    Oracle SOA Suite Healthcare came up with a new way of monitoring, where users can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals. A dashboard shows the following information: Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors. Messages Sent: The number of messages sent by the endpoint in the specified time period. Messages Received: The number of messages received by the endpoint in the specified time period. Errors: The number of messages with errors for the endpoint in the given time period. Last Sent: The date and time the last message was sent from the endpoint. Last Received: The date and time the last message was received from the endpoint. Last Error: The date and time of the last error for the endpoint. It also shows a detailed view of a specific endpoint: the document type, the number of messages received per second, the total number of messages processed in the specified time period, and the average size of each message. For more information please visit Nitesh Jain's blog. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Suite,SOA heathcare,soa health,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Using NDMP as an alternative to CIFS mount

    - by user138922
    I have a weird but interesting use-case. I use CIFS to mount shares from a File Server (NetApp, EMC etc) to an application server (a win/linux server where my application runs). My application needs to process each of the files from the shares that I mount via CIFS. My application also needs access to the meta-data of these files such as Name, Size, ACLs etc. I would like to see if I can achieve the same via NDMP. I have some very basic questions regarding this use-case. It would be great if you guys could help me out here. Is this even something which is achievable? Can I just transfer the shares that interest me instead of the entire volume?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 FQL – Facebook Query Language Facebook list following advantages of FQL: Condensed XML reduces bandwidth and parsing costs. More complex requests can reduce the number of requests necessary. Provides a single consistent, unified interface for all of your data. It’s fun! UDF – Get the Day of the Week Function The day of the week can be retrieved in SQL Server by using the DatePart function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement. UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday While reading ColdFusion blog of Ben Nadel Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realize that I use similar function on my SQL Server Database. This function excludes the Weekends (Saturday and Sunday), and it gets previous as well as next work day. Complete Series of SQL Server Interview Questions and Answers Data Warehousing Interview Questions and Answers – Introduction Data Warehousing Interview Questions and Answers – Part 1 Data Warehousing Interview Questions and Answers – Part 2 Data Warehousing Interview Questions and Answers – Part 3 Data Warehousing Interview Questions and Answers Complete List Download 2008 Introduction to Log Viewer In SQL Server all the windows event logs can be seen along with SQL Server logs. Interface for all the logs is same and can be launched from the same place. This log can be exported and filtered as well. DBCC SHRINKFILE Takes Long Time to Run If you are DBA who are involved with Database Maintenance and file group maintenance, you must have experience that many times DBCC SHRINKFILE operations takes a long time but any other operations with Database are relatively quicker. mssqlsystemresource – Resource Database The purpose of resource database is to facilitates upgrading to the new version of SQL Server without any hassle. In previous versions whenever version of SQL Server was upgraded all the previous version system objects needs to be dropped and new version system objects to be created. 2009 Puzzle – Write Script to Generate Primary Key and Foreign Key In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one is required to script keys they will have to manually script each key one at a time. If database has many tables, generating one key at a time can be a very intricate task. I want to throw a question to all of you if any of you have scripts for the same purpose. Maximizing View of SQL Server Management Studio – Full Screen – New Screen I had explained the following two different methods: 1) Open Results in Separate Tab - This is a very interesting method as result pan shows up in a different tab instead of the splitting screen horizontally. 2) Open SSMS in Full Screen - This works always and to its best. Not many people are aware of this method; hence, very few people use it to enhance performance. 2010 Find Queries using Parallelism from Cached Plan T-SQL script gets all the queries and their execution plan where parallelism operations are kicked up. 
Pay attention there is TOP 10 is used, if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50 This is the list of the all the articles in the series of computed columns. SQL SERVER – Computed Column – PERSISTED and Storage This article talks about how computed columns are created and why they take more storage space than before. SQL SERVER – Computed Column – PERSISTED and Performance This article talks about how PERSISTED columns give better performance than non-persisted columns. SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 This article talks about how non-persisted columns give better performance than PERSISTED columns. SQL SERVER – Computed Column and Performance – Part 3 This article talks about how Index improves the performance of Computed Columns. SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 This article talks about how creating index on computed column does not grow the row length of table. SQL SERVER – Computed Columns – Index and Performance This article summarized all the articles related to computed columns. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension tables? Describes the Foreign Key Columns in Fact Table and Dimension Table? What is Data Mining? What is the Difference between a View and a Materialized View? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is ER Diagram? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 What is ETL? What is VLDB? Is OLTP Database is Design Optimal for Data Warehouse? If denormalizing improves Data Warehouse Processes, then why is the Fact Table is in the Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data-Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is a Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Facts Table? What are Slowly Changing Dimensions (SCD)? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 What is Hybrid Slowly Changing Dimension? What is BUS Schema? What is a Star Schema? What Snow Flake Schema? Differences between the Star and Snowflake Schema? What is Difference between ER Modeling and Dimensional Modeling? What is Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is Junk Dimension? What is a Data Mart? What is the Difference between OLAP and Data Warehouse? What is a Cube and Linked Cube with Reference to Data Warehouse? What is Snapshot with Reference to Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigm of Bill Inmon and Ralph Kimball. SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 Paras Doshi has submitted 21 interesting question and answers for SQL Azure. 1.What is SQL Azure? 2.What is cloud computing? 
3.How is SQL Azure different than SQL server? 4.How many replicas are maintained for each SQL Azure database? 5.How can we migrate from SQL server to SQL Azure? 6.Which tools are available to manage SQL Azure databases and servers? 7.Tell me something about security and SQL Azure. 8.What is SQL Azure Firewall? 9.What is the difference between web edition and business edition? 10.How do we synchronize On Premise SQL server with SQL Azure? 11.How do we Backup SQL Azure Data? 12.What is the current pricing model of SQL Azure? 13.What is the current limitation of the size of SQL Azure DB? 14.How do you handle datasets larger than 50 GB? 15.What happens when the SQL Azure database reaches Max Size? 16.How many databases can we create in a single server? 17.How many servers can we create in a single subscription? 18.How do you improve the performance of a SQL Azure Database? 19.What is code near application topology? 20.What were the latest updates to SQL Azure service? 21.When does a workload on SQL Azure get throttled? SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 Malachi had asked a simple question which has several answers. Each answer makes you think and ponder about the reality of the IT world. Look at the simple question – ‘What is the toughest challenge you have faced in your present job and how did you handle it’? and its various answers. Each answer has its own story. SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 Rick Morelan of Joes2Pros has written an excellent blog post on the subject how to find top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years preparing so many students to pass the SQL Certification I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions in his lucid writing skills. 2012 Observation of Top with Index and Order of Resultset SQL Server has lots of things to learn and share. It is amazing to see how people evaluate and understand different techniques and styles differently when implementing. The real reason may be absolutely different but we may blame something totally different for the incorrect results. Read the blog post to learn more. How do I Record Video and Webcast How to Convert Hex to Decimal or INT Earlier I asked regarding a question about how to convert Hex to Decimal. I promised that I will post an answer with Due Credit to the author but never got around to post a blog post around it. Read the original post over here SQL SERVER – Question – How to Convert Hex to Decimal. Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset The natural reaction will be to suggest DISTINCT or GROUP BY. However, not all the questions can be solved by DISTINCT or GROUP BY. Let us see the following example, where a user wanted only latest records to be displayed. Let us see the example to understand further. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How can I make OpenGL textures scale without becoming blurry?

    - by adorablepuppy
    I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change it's scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft. Here is the code inside my RenderableEntity object: public void render(){ Color.white.bind(); this.spriteSheet.bind(); GL11.glBegin(GL11.GL_QUADS); GL11.glTexCoord2f(0,0); GL11.glVertex2f(this.x, this.y); GL11.glTexCoord2f(1,0); GL11.glVertex2f(getDrawingWidth(), this.y); GL11.glTexCoord2f(1,1); GL11.glVertex2f(getDrawingWidth(), getDrawingHeight()); GL11.glTexCoord2f(0,1); GL11.glVertex2f(this.x, getDrawingHeight()); GL11.glEnd(); } And here is code from my initGL method in my game class GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(0.46f,0.46f,0.90f,1.0f); GL11.glViewport(0,0,width,height); GL11.glOrtho(0,width,height,0,1,-1); And here is the code that does the actual drawing public void start(){ initGL(800,600); init(); while(true){ GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); for(int i=0;i<entities.size();i++){ ((RenderableEntity)entities.get(i)).render(); } Display.update(); Display.sync(100); if(Display.isCloseRequested()){ Display.destroy(); System.exit(0); } } }

    Read the article

  • CD/DVD burn error in ImgBurn and Nero

    - by bobby
    I am getting the errors shown below when I try to burn a CD/DVD on my DVD writer. I am seeing this error for every CD/DVD I try to burn. I am not able to write any CDs or DVDs using ImgBurn. The burn log below is a failed burn in Nero. What could be causing this error? Nero Burning ROM bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*) Windows XP 6.1 IA32 WinAspi: - NT-SPTI used Nero Version: 7.11.3. Internal Version: 7, 11, 3, (Nero Express) Recorder: <HL-DT-ST DVDRAM GSA-H12N> Version: UL01 - HA 1 TA 1 - 7.11.3.0 Adapter driver: <IDE> HA 1 Drive buffer : 2048kB Bus Type : default CD-ROM: <ATAPI-CD ROM-DRIVE-52MAX > Version: 52PP - HA 1 TA 0 - 7.11.3.0 Adapter driver: <IDE> HA 1 === Scsi-Device-Map === === CDRom-Device-Map === ATAPI-CD ROM-DRIVE-52MAX F: CdRom0 HL-DT-ST DVDRAM GSA-H12N G: CdRom1 ======================= AutoRun : 1 Excluded drive IDs: WriteBufferSize: 83886080 (0) Byte BUFE : 0 Physical memory : 958MB (981560kB) Free physical memory: 309MB (317024kB) Memory in use : 67 % Uncached PFiles: 0x0 Use Inquiry : 1 Global Bus Type: default (0) Check supported media : Disabled (0) 11.6.2010 CD Image 10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450 LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL 10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186 HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated 10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500 Turn on Disc-At-Once, using CD-R/RW media 10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307 Last possible write address on media: 359848 ( 79:59.73) Last address to be written: 318783 ( 70:52.33) 10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319 Write in overburning mode: NO (enabled: CD) 10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988 Recorder: HL-DT-ST DVDRAM G SA-H12N; CDR co de: 00 97 27 18; O SJ entry from: Pla smon Data systems Ltd. 
ATIP Data: Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A ( LO 79:59.74) Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 0 0 00 (invalid) 10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493 >>> Protocol of DlgWaitCD activities: <<< ========================================= 10:43:02 AM #8 Text 0 File ThreadedTransferInterface.cpp, Line 785 Nero Report 1 Nero Burning ROM Setup items (after recorder preparation) 0: TRM_DATA_MODE1 (2 - CD-ROM Mode 1, Joliet) 2 indices, index0 (150) not provided original disc pos #0 + 318784 (318784) = #318784/70:50.34 not relocatable, disc pos for caching/writing not required/not required -> TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blocks [G: HL-DT-ST DVDRAM GSA-H12N] -------------------------------------------------------------- 10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986 Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO DAO infos: ========== MCN: "" TOCType: 0x00; Se ssion Clo sed, disc fixated Tracks 1 to 1: Idx 0 Idx 1 Next T rk 1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 6531768 32, ISRC "" DAO layout: =========== ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________ -150 | lead-in | 0 | 0x41 | 0 | 0 | 0x00 -150 | 1 | 0 | 0x41 | 0 | 0 | 0x00 0 | 1 | 1 | 0x41 | 318784 | 318784 | 0x00 318784 | lead-out | 1 | 0x41 | 0 | 0 | 0x00 10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240 SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME 10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286 Caching options: cache CDRom or Network-Yes, small files-Yes (<64KB) 10:43:02 AM #12 Phase 24 File dlgbrnst.cpp, Line 1767 Caching of files started 10:43:02 AM #13 Text 0 File Burncd.cpp, Line 4405 Cache writing successful. 
10:43:02 AM #14 Phase 25 File dlgbrnst.cpp, Line 1767 Caching of files completed 10:43:02 AM #15 Phase 36 File dlgbrnst.cpp, Line 1767 Burn process started at 48x (7,200 KB/s) 10:43:02 AM #16 Text 0 File ThreadedTransferInterface.cpp, Line 2733 Verifying disc position of item 0 (not relocatable, no disc pos, no patch infos, orig at #0): write at #0 10:43:02 AM #17 Text 0 File MMC.cpp, Line 17806 StartDAO : CD-Text - Off 10:43:02 AM #18 Text 0 File MMC.cpp, Line 22488 Set BUFE: Buffer underrun protection -> ON 10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034 CueData, Len=32 41 00 00 14 00 00 00 00 41 01 00 10 00 00 00 00 41 01 01 10 00 00 02 00 41 aa 01 14 00 46 34 22 10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268 Pipe memory size 83836800 10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405 10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later 10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181 CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502) Sense Key: 0x04 (KEY_HARDWARE_ERROR) Nero Report 2 Nero Burning ROM Sense Code: 0x08 Sense Qual: 0x03 CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00 Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03 Buffer x0c7d9a40: Len x10000 0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24 0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16 0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23 10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306 DMA-driver error, CRC error G: HL-DT-ST DVDRAM GSA-H12N 10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767 Burn process failed at 48x (7,200 KB/s) 10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287 SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME 10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412 DriveLocker: UnLockVolume completed 10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450 UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL Existing drivers: Registry Keys: HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon Nero Report 3

    Read the article

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the keyword TEMPORARY along with the CREATE TABLE statement. Let us create the temporary table named Temp CREATE TEMPORARY TABLE TEMP (id INT); Now you can find out the column names using the DESC command DESC TEMP; The above returns the following result This table can be accessed only from the current connection, it can be used like a permanent table, and it is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view. It only lists the permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the MEMORY storage engine. But if the data size is too large MySQL automatically converts this to an on-disk table and uses the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection. However, the structure of the permanent table is visible only after the temporary table with the same name is dropped. Let us create a permanent table with the same name Temp as below CREATE TABLE TEMP (id INT, names VARCHAR(100)); Now running the following command still gives you the structure of the temporary table temp created earlier. DESC TEMP; You can drop the temporary table using the DROP TEMPORARY TABLE command: DROP TEMPORARY TABLE TEMP; After you drop the temporary table, run the following command DESC TEMP; Now you will see the structure of the permanent table named temp. In summary – if there is a temporary table in MySQL, it gets priority over the permanent table of the same name in the session. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
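
    For convenience, the whole walk-through above can be replayed in one shot through the mysql command-line client (a sketch; the account is a placeholder, and every statement below runs on a single connection, which is what keeps the temporary table alive between statements):

      mysql -u root -p -e '
        CREATE TEMPORARY TABLE TEMP (id INT);
        DESC TEMP;                  -- structure of the temporary table
        CREATE TABLE TEMP (id INT, names VARCHAR(100));
        DESC TEMP;                  -- still the temporary table, shadowing the permanent one
        DROP TEMPORARY TABLE TEMP;
        DESC TEMP;                  -- now the permanent table is visible
      '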

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory related parameters for a process. For example working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around, we have an application which cannot allocate as much virtual memory running on a terminal server as it can when running on a desktop PC (both I would expect to have a limit of 2GB for user mode address space) and I was wondering if there is another limit for processes or users on a terminal server. Perhaps even 2GB per user rather than per process.

    Read the article

  • SQL SERVER – 2011 – Multi-Monitor SSMS Windows

    - by pinaldave
    I have a dual screen arrangement at my home system. I love it because it’s very convenient. When I am working with SQL Server 2008 R2 or any earlier versions, I want to use both of the monitors, so I open two separate SQL Server Management Studio instances and work with them. I have no complaints with my system, at all. I am totally fine with it. However, sometimes I face small issues, like when I just want a small piece of code open in a separate window but I do not want that window to take over the whole of another monitor. But then again, I am already used to this current system. Recently when I was working with SQL Server 2011 ‘Denali’ CTP1, I dragged one of the windows by accident, and suddenly it magically appeared out of its SSMS ‘shell’ and was appearing on a separate monitor. I played around a bit and figured out that SSMS now offers multi-monitor (or multi-screen) support within a single SSMS instance. We can now drag any window out or in and resize it to any size. Fantastic! If you are a multi-monitor user, I am sure you will like this feature. This leads me to ask you a question: do you use a multi-monitor system while working with SQL Server? Leave a quick comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • KVM CLI install for CentOS 6.3 defaults to Minimal Install

    - by i.h4d35
    So now I've installed KVM (and its associated tools and packages - libvirt, VMM etc.). On the GUI (i.e. using VMM), installation works as it's supposed to. However, when I try to create a VM using the command line interface, the OS (I am working with CentOS 6.3) defaults to a Minimal Install instead of giving me options to choose from at the time of installation. I am trying to install using the following command: virt-install \ --connect qemu:///system \ --virt-type kvm --name testVM2 \ --ram 512 --disk path=/var/lib/libvirt/images/testVM2.img,size=8 --vnc \ --cdrom /media/db18de8e-0853-49fb-80de-5c794d58a46f/CentOS-6.3-x86_64-bin-DVD1.iso \ --network network=default Specifying the OS-type or the OS-variant parameters doesn't make a difference. Is there something that I am missing out on or some other parameter that I must specify? Thanks in advance.
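
    One direction to try (a sketch, untested here): when the CentOS 6 installer ends up in text mode it simply doesn't offer package selection, so either drive the graphical installer over the --vnc console with virt-viewer, or switch from --cdrom to --location so that --extra-args can point at a kickstart file that names the package groups explicitly. Both URLs below are placeholders.

      virt-install \
        --connect qemu:///system \
        --virt-type kvm --name testVM2 \
        --ram 512 --disk path=/var/lib/libvirt/images/testVM2.img,size=8 --vnc \
        --location http://mirror.example.com/centos/6.3/os/x86_64/ \
        --extra-args "ks=http://192.168.122.1/ks/centos63-desktop.cfg" \
        --network network=default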

    Read the article

  • Can I enlarge the OS C drive of my Windows 8?

    - by Sorgatz
    Last year I got a new Western Digital WD Blue 500GB HDD to replace my old drive. The first thing I did was to install the latest Windows 8. While installing Windows 8 I created 3 partitions: the C drive for the OS and others for storage. The OS partition is 120GB (which at the time I thought would be plenty big) but I'm now realizing it's too small! I wonder if it's possible to resize the HDD partitions without reformatting and reinstalling my Windows 8. So that is my question: can I enlarge the OS C drive of my Windows 8 without having to reformat? I've tried Norton Partition Magic and Disk Management, but there doesn't seem to be an option to make it happen. Thanks for any help you guys can give regarding my question. I've worked hard to optimize my current install of Windows 8 and would hate to start all over again.

    Read the article

  • Changed folder contents not updating in Finder (OS X 10.8)

    - by speedofmac
    I've been having this problem for a number of days now. When I update the contents of a folder using terminal commands, by decompressing archives, or by using "Save As", the affected folder often fails to reflect these changes. Sometimes it takes quite a while for any files to be shown, even if the total size of the folder is fairly small (< 10 MB). I'm running an '09 MacBook Pro 13", so it isn't the newest system, but it certainly has enough oomph to display a list of files in a folder. Does anyone know what might be causing this?
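
    Not a root-cause fix, but two things worth trying from Terminal while this is tracked down (a sketch; the path is a placeholder):

      touch /path/to/the/folder    # bump the folder's mtime so Finder notices a change
      killall Finder               # relaunch Finder and force its windows to re-read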

    Read the article

  • How do I execute a command before kickstart parses ks.cfg?

    - by Crazy Chenz
    How do I execute a command before kickstart parses ks.cfg? My specific problem is that I want to install redhat into a tmpfs by telling kickstart: part / --fstype ext3 --size 1000 --maxsize 4000 --ondisk loop1 I've tried doing: %pre #!/bin/sh mkdir /tmp-root mount -t tmpfs tmpfs /tmp-root dd if=/dev/zero of=/tmp-root/tmp-root.img bs=4096 count=1000000 losetup /dev/loop1 /tmp-root/tmp-root.img but that is not done early enough. Ugh! Update: I'm beginning to think it has nothing to do with being done early enough. I believe it has to do with anaconda and kudzu not thinking that a loopback device is a valid device. I'm not a python guy, so the idea of hacking up the kickstart code sucks! -Vinnie
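
    For what it's worth, a sketch of the usual pattern (whether anaconda will accept a loop device for --ondisk is a separate question): have %pre write the partitioning directive to a file and pull it in with %include, since included files are read after %pre has run.

      %pre
      #!/bin/sh
      mkdir /tmp-root
      mount -t tmpfs tmpfs /tmp-root
      dd if=/dev/zero of=/tmp-root/tmp-root.img bs=4096 count=1000000
      losetup /dev/loop1 /tmp-root/tmp-root.img
      # hand the part line to anaconda instead of hard-coding it in ks.cfg
      echo "part / --fstype ext3 --size 1000 --maxsize 4000 --ondisk loop1" > /tmp/part-include

      # ...and in the main body of ks.cfg, replace the static part line with:
      %include /tmp/part-include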

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft’s pricing for SQL Server in the cloud, SQLAzure has been announced: $9.99   per month for 0 – 1GB $99.99 per month up to 10GB. There’s currently a 10GB maximum size cap for SQLAzure. For larger data storage needs, you’ll need to break the databases into smaller sizes. Scaling SQL Azure Applications If you think you’re going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.99 anyway) and just make really sure none of the individual databases exceed 10GB. Beep Beep, Back That Database Up The bandwidth costs for SQL Azure are $.15 per GB of outbound bandwidth.  Assuming that you don’t compress the data before you pull it out of the cloud, that means daily backups of a 1GB database will add another $4.50 per month, and a 10GB database will add another $45/month.  Daily backups will cost about half of what your monthly service charges cost. It’s not completely clear from the press release, but if Microsoft follows Amazon’s pricing model, bandwidth between the Microsoft cloud services will not incur a cost.  That would mean it might make sense to spin up an Windows Azure computing application for $.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup.  That would eliminate the data in/out costs, and minimize the Azure storage costs ($.15/GB).  Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there’s no native backup support in SQL Azure, and it’s not clear whether Windows Azure will include tools like SQL Server Integration Services. More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/   Anish, S

    Read the article

  • Why is (free_space + used_space) != total_size in df? [migrated]

    - by Timothy Jones
    I have a ~2TB ext4 USB external disk which is about half full: $ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc 1922860848 927384456 897800668 51% /media/big I'm wondering why the total size (1922860848) isn't the same as Used+Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.
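
    The root-reserved blocks are the usual culprit, but ext4 also spends space on inode tables, the journal and group descriptors. A quick check against the same device (a sketch): multiply the reported Reserved block count by the Block size and compare it with the gap df shows; whatever is left over is filesystem metadata rather than reserved space.

      sudo tune2fs -l /dev/sdc | egrep -i 'block count|block size|journal inode'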

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks
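
    A hedged sketch of the direction that tuner output points in for a 250MB box: shrink the worst case (max_connections times the per-thread buffers, plus the global buffers) instead of growing anything, and drop the overrides into the conf.d directory the existing !includedir line already picks up, then restart MySQL (service mysql restart). The file name and values below are illustrative, not tuned for this workload.

      # /etc/mysql/conf.d/small-vps.cnf
      [mysqld]
      max_connections         = 30    # only 6 were ever used; 100 produced the 978M worst case
      sort_buffer_size        = 1M    # per-thread buffers multiply by max_connections
      read_buffer_size        = 1M
      tmp_table_size          = 32M
      max_heap_table_size     = 32M
      table_cache             = 256   # tuner: table cache hit rate 0%
      query_cache_size        = 32M   # prunes vs. RAM is a trade-off at this size
      innodb_buffer_pool_size = 32M   # tuner: 29M of InnoDB data in an 8M pool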

    Read the article

  • CodePlex Daily Summary for Sunday, December 05, 2010

    CodePlex Daily Summary for Sunday, December 05, 2010Popular ReleasesSubtitleTools: SubtitleTools 1.0: First public releaseMiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ???????????????????????????????? ?? ??????????????????????????????????Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targetted for stable daily use. With improved performance and enhanced compatibility with several latest PHP open source applications; it makes this release perfect replacement of your old PHP runtime. Changes made within this release include following and much more: Performance improvements based on real-world applications experience. We determined biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...MDownloader: MDownloader-0.15.26.7024: Fixed updater; Fixed MegauploadDJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Update to support jQuery 1.4.2 Update to support jQuery ui 1.8.6 Update to Visual Studio 2010 New WebControls with samples added Autocomplete WebControl Button WebControl ToggleButt WebControl The example web site is including in source code project.LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Unterschiede zur Vorgängerversion: - Zusätzliche Interior Properties - Group / Ungroup Methoden für Range - Bugfix COM Reference Handling für Application Objekt in einigen Klassen Release+Samples V0.7g: - Enthält Laufzeit DLL und Beispielprojekte Beispielprojekte: COMAddinExample - Demonstriert ein versionslos angebundenes COMAddin Example01 - Background Colors und Borders für Cells Example02 - Font Attributes undAlignment für Cells Example03 - Numberformats Example04 - Shapes, WordArts, P...ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1 Added Windows Phone 7 build. New controls added: InfoWindow ChildPage (Windows Phone 7 only) See what's new here full details for : http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: Requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes Atomic CMS installation guide Winware: Winware 3.0 (.Net 4.0): Winware 3.0 is base on .Net 4.0 with C#. Please open it with Visual Studio 2010. This release contains a lab web application.UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Voici une version attendu avec impatience pour beaucoup : - La Version CEX341 pour pouvoir jouer avec des jeux demandant le firmware 3.50 ( certain ne fonctionne tous simplement pas ). - Pour l'instant le CEX341 n'est disponible qu'avec les PS3 en firmwares 3.41 !!! - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 3.30 !!! 
Conclusion : - UltimateJB CEX341 => Spoof le Firmware 3.41 en 3.50 ( facilite l'utilisation de certain jeux avec openManage...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, Today we are releasing Visifire 3.6.5 beta with the following new feature: New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. Also this release includes few bug fixes: AxisXLabel label were getting clipped if angle was set for AxisLabels and ScrollingEnabled was not set in Chart. If LabelStyle property was set as 'Inside', size of the Pie was not proper. Yo...EnhSim: EnhSim 2.1.1: 2.1.1This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...AI: Initial 0.0.1: It’s simply just one code file; it simulates AI and machine in a simulated world. The AI has a little understanding of its body machine and parts, and able to use its feet to do actions just start and stop walking. The world is all of white with nothing but just the machine on a white planet. Colors, odors and position information make no sense. I’m previous C# programmer and I’m learning F# during this project, although I’m still not a good F# programmer, in this project I learning to prog...NKinect: NKinect Preview: Build features: Accelerometer reading Motor serial number property Realtime image update Realtime depth calculation Export to PLY (On demand) Control motor LED Control Kinect tiltMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...Minecraft GPS: Minecraft GPS 1.1.1: New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-vista kernel machines.DotSpatial: DotSpatial 11-28-2001: This release introduces some exciting improvements. Support for big raster, both in display and changing the scheme. 
Faster raster scheme creation for all rasters. Caching of the "sample" values so once obtained the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is base on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code include a LiveChat demo.New ProjectsBambook???: ????Bambook???????。Beespot: Beespot is an easy to use, secure, robust and powerful Honeypot for the SSH Service written in Python. caitanzhangDemo: this is my demoColorPicker [SA:MP]: ColorPicker [SA:MP] is a simple tool that generates: - PAWN Hex Color Codes (useful for SAMP Scripts); - ARGB Color Codes; - HTML Color Codes; It's developed in C#.Conversions-n-Stuff: Conversions-n-Stuff (CNS) is a program focused on making it easier for anyone to convert from one measurement to another. There is no need to know the calculations and formulas! Just fill in the forms and click, and you have your answer! CNS leverages C#, WPF, and Silverlight.dotP2P: dotP2P would consist of servers running caches to keep track of domain and nameserver records. Cache servers can be created with any server that supports XML-RPC or SOAP. MySQL is used to store the the cache data. EmailMasterTemplate: This user master and child user control based email template engine.Ezekiel: Ezekiel is a Windows application that leverages a user's existing BusinessObjects reports to provide a custom read-only front end for a database. It's developed using Visual C# 2010 Express.F# Colorizer Editor: Standalone Colorizer Editor for Brian's Fsharp Deep Colorizer VS ExtensionGameCore: Core engine for game services for mobile and RIA clientsGestão de contas bancárias: Trabalho final de Matematica Aplicada da UATLAGPP: GPPkmean: Kmeans ClusteringPHP ORM: ??????orm???,??PHP????,??????????????!Steampunk Odyssey: Steampunk Odyssey is a side-scrolling action game based on the XNA platformSubtitleTools: SubtitleTools is a small utility that helps modifying existing subtitles or downloading new ones based on the digital signatures of your movie files from opensubtitles.org site.Windows Phone 7 Accelerometer: Accelerometer for Windows Phone 7???: ????????????????????????

    Read the article

  • How to set WAN side buffers for WRT54GL running Tomato Firmware

    - by Vickash
    I've recently set up a machine running m0n0wall to try and fight buffer bloat and do some traffic shaping. It was more convenient (geographically speaking) to connect the cable modem directly to my old WRT54GL, then pass everything to the m0n0wall machine and have that do the real routing work. It took a bit of work, but it's working pretty well. I have a cable connection. I have m0n0wall set up to utilize only 90% of the specified speed of my subscription, which is fine. But I've noticed that at certain times of the day (possibly when my true bandwidth drops below that 90%), there's more latency if the connection is used heavily, and traffic shaping doesn't seem to work as well. I suspect this is caused by the buffers on the WRT54GL still being unnecessarily large. If the connection is working as expected, they won't get filled, but in times of reduced bandwidth they would. Does anyone know the command I need to execute, on the WRT54GL running Tomato Firmware, to reduce the buffers on the WAN interface to the minimum size possible?
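
    I can't point at one authoritative command, but on Tomato the WAN is normally vlan1 and the closest shell-level knob is the transmit queue length (a sketch; confirm the interface name first, and note this only shrinks the driver's tx queue, not the hardware ring):

      nvram get wan_ifname           # usually reports vlan1 on a WRT54GL
      ifconfig vlan1 txqueuelen 4    # default is much larger; put this line under
                                     # Administration -> Scripts -> Init to survive reboots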

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a Sr. developer/architect/Product Manager for embedded systems. The systems that I have had experience with have typically been small to medium-sized codebases - close to 25-30K LOC in C, using 8-, 16- and 32-bit low-end microcontrollers. The systems have been entirely bootstrapped by our team - meaning everything from the start-up code to the end application code has either been written by the team, or at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Now, from a habit point of view, this does not give me and the team a comfortable feeling, not being able to comprehend the entire code tree, with virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time for getting into the details of how the APIs are implemented, assuming that the same would also entail getting into the details of relevant standards (again, with USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code to all the libraries is available, which it is, in almost all cases. Edit: In partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, AFAIK and as far as I have been able to judge. I have not had long experience with it.

    Read the article

  • Options to optimize lotus notes 8.5.X

    - by Jakub
    Has anyone found actually useful optimization methods for the bulky, fat, Eclipse-based giant of a nuisance that is Lotus Notes 8.5? I want it to be fast, and not eat up system resources like crazy while I run it ALL day (as it is my company's corporate mail / cal / scheduling solution). I've tried various hacks for the JVM heap size (if I recall correctly). None really bring a performance improvement. I have a dual-core CPU, if that helps (I tried going the route of optimizing Java for 2 cores in hopes it would work, but saw no speed improvement). Notes is just sooo bloated; does anyone have any suggestions to optimize/mod this thing so it is more responsive and less of a resource hog? Note: I don't want to switch to the web version, or the standard stripped-down versions; I am aware of those, I just cannot, since we don't run those internally for the company.

    Read the article

  • Cannot resize OS X partition

    - by joshhunt
    I am trying to resize my existing Mac OS Extended partition on my MacBook to install Windows 7 (using steps similar to these), but whenever I go to apply the changes, I get this error: Partition failed Partition failed with the error: The partition cannot be resized. Try reducing the amount of change in the size of the partition. The total capacity of the hard drive in question is 260GB, with the entirety being taken up by the OS X boot partition. I am aiming to shrink that partition down to 60GB. How can I fix this problem? I have been reducing the amount of change by 10GB each attempt, but it still is not working. I assume the problem is that there is not a large amount of contiguous space on the device. Is there some way I can do a manual defrag that would rectify this problem?
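
    Before a wipe-and-restore, the command line sometimes reports a more specific error than the Disk Utility GUI, and verifying the volume first rules out directory damage (a sketch; the identifier and target size are examples):

      diskutil list                             # find the OS X volume's identifier, e.g. disk0s2
      diskutil verifyVolume disk0s2
      sudo diskutil resizeVolume disk0s2 200G   # try a modest shrink first, then step down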

    Read the article

  • Disabling weak ciphers on Windows 2003

    - by Kev
    For PCI-DSS compliance you have to disable weak ciphers. PCI-DSS permits a minimum cipher size of 128 bits. However, for the highest score (0, I believe) you should only accept 168-bit ciphers, though you can still be compliant if you permit 128-bit ciphers. The trouble is that when we disable all but 168-bit encryption, it seems to disable both inbound and outbound secure channels. For example, we'd like to lock down inbound IIS HTTPS to 168-bit ciphers but permit outbound 128-bit SSL connections to payment gateways/services from service applications running on the server (not all payment gateways support 168-bit only, as we just found out today). Is it possible to have cipher asymmetry on Windows 2003? I am told it is all or nothing.

    Read the article

  • Increasing FreeBSD threads

    - by sh-beta
    For network apps that create one thread per connection (like Pound), the thread count can become a bottleneck on the number of concurrent connections you can serve. I'm running FreeBSD 8 x64: $ sysctl kern.maxproc kern.maxproc: 6164 $ sysctl kern.threads.max_threads_per_proc kern.threads.max_threads_per_proc: 1500 $ limits Resource limits (current): cputime infinity secs filesize infinity kB datasize 33554432 kB stacksize 524288 kB coredumpsize infinity kB memoryuse infinity kB memorylocked infinity kB maxprocesses 5547 openfiles 200000 sbsize infinity bytes vmemoryuse infinity kB pseudo-terminals infinity swapuse infinity kB I want to increase kern.threads.max_threads_per_proc to 4096. Assuming each thread starts with a stack size of 512k, what else do I need to change to ensure that I don't hose my machine?
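
    A sketch of the usual companions to that change on FreeBSD 8 (values illustrative; the practical ceiling is kernel memory, since every thread also carries a kernel stack):

      sysctl kern.threads.max_threads_per_proc=4096         # takes effect immediately
      sysctl kern.maxproc kern.maxprocperuid vm.kmem_size   # sanity-check the related ceilings
      echo 'kern.threads.max_threads_per_proc=4096' >> /etc/sysctl.conf   # persist across reboots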

    Read the article
