Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

  • IP address shows as a hyphen for failed remote desktop connections in Event Log

    - by PsychoDad
    I am trying to figure out why failed remote desktop connections (from Windows Remote Desktop) show the client IP address as a hyphen. Here is the event I get when I type the wrong password for an account (the server is completely external to my home computer):
      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
          <EventID>4625</EventID>
          <Version>0</Version>
          <Level>0</Level>
          <Task>12544</Task>
          <Opcode>0</Opcode>
          <Keywords>0x8010000000000000</Keywords>
          <TimeCreated SystemTime="2012-03-25T19:22:14.694177500Z" />
          <EventRecordID>1658501</EventRecordID>
          <Correlation />
          <Execution ProcessID="544" ThreadID="12880" />
          <Channel>Security</Channel>
          <Computer>[Delete for Security Purposes]</Computer>
          <Security />
        </System>
        <EventData>
          <Data Name="SubjectUserSid">S-1-0-0</Data>
          <Data Name="SubjectUserName">-</Data>
          <Data Name="SubjectDomainName">-</Data>
          <Data Name="SubjectLogonId">0x0</Data>
          <Data Name="TargetUserSid">S-1-0-0</Data>
          <Data Name="TargetUserName">[Delete for Security Purposes]</Data>
          <Data Name="TargetDomainName">[Delete for Security Purposes]</Data>
          <Data Name="Status">0xc000006d</Data>
          <Data Name="FailureReason">%%2313</Data>
          <Data Name="SubStatus">0xc000006a</Data>
          <Data Name="LogonType">3</Data>
          <Data Name="LogonProcessName">NtLmSsp </Data>
          <Data Name="AuthenticationPackageName">NTLM</Data>
          <Data Name="WorkstationName">MyComputer</Data>
          <Data Name="TransmittedServices">-</Data>
          <Data Name="LmPackageName">-</Data>
          <Data Name="KeyLength">0</Data>
          <Data Name="ProcessId">0x0</Data>
          <Data Name="ProcessName">-</Data>
          <Data Name="IpAddress">-</Data>
          <Data Name="IpPort">-</Data>
        </EventData>
      </Event>
    I have found nothing online after several hours of searching and am trying to stop Terminal Services attacks. Any insight is appreciated.

    Read the article

  • Is it bad practice to use an enum that maps to some seed data in a Database?

    - by skb
    I have a table in my database called "OrderItemType" which has about 5 records for the different OrderItemTypes in my system. Each OrderItem contains an OrderItemType, and this gives me referential integrity. In my middle-tier code, I also have an enum which matches the values in this table so that I can have business logic for the different types. My dev manager says he hates it when people do this, and I am not exactly sure why. Is there a better practice I should be following?
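
    For reference, a minimal sketch of the pattern being debated, assuming a Java middle tier (the question does not name a language); the type names and ids below are illustrative, not the actual seed data:

      // Mirrors the seed rows in the OrderItemType table; the ids must stay in
      // sync with the database rows, which is exactly the coupling in question.
      public enum OrderItemType {
          STANDARD(1),
          GIFT(2),
          SUBSCRIPTION(3),
          DIGITAL(4),
          BACKORDER(5);

          private final int id;

          OrderItemType(int id) {
              this.id = id;
          }

          public int getId() {
              return id;
          }

          public static OrderItemType fromId(int id) {
              for (OrderItemType t : values()) {
                  if (t.id == id) return t;
              }
              throw new IllegalArgumentException("Unknown OrderItemType id: " + id);
          }
      }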

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way? The text file can be massive (gigabytes), and you can assume a multiple-server/clustered environment so this can be done in a distributed manner. Open source libraries are welcome. I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to hold the file as a string buffer in memory). I am also thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
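
    A minimal single-machine sketch of the NIO idea above: stream the file in fixed-size chunks from the tail, reverse each chunk in memory, and append it to the output, so heap usage stays bounded regardless of file size. It assumes "reverse" means reversing the byte order (reversing line order would instead mean scanning backwards to newline boundaries); the class name and chunk size are illustrative:

      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.nio.channels.FileChannel;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.nio.file.StandardOpenOption;

      public class FileReverser {
          private static final int CHUNK = 64 * 1024 * 1024; // 64 MB per pass; tune to the heap

          public static void reverse(Path in, Path out) throws IOException {
              try (FileChannel src = FileChannel.open(in, StandardOpenOption.READ);
                   FileChannel dst = FileChannel.open(out, StandardOpenOption.CREATE,
                           StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
                  long remaining = src.size();
                  ByteBuffer buf = ByteBuffer.allocateDirect(CHUNK);
                  while (remaining > 0) {
                      int len = (int) Math.min(CHUNK, remaining);
                      buf.clear();
                      buf.limit(len);
                      // Read the tail-most chunk that has not been processed yet.
                      while (buf.hasRemaining()) {
                          src.read(buf, remaining - len + buf.position());
                      }
                      // Reverse the chunk in place using absolute gets/puts.
                      for (int i = 0, j = len - 1; i < j; i++, j--) {
                          byte tmp = buf.get(i);
                          buf.put(i, buf.get(j));
                          buf.put(j, tmp);
                      }
                      buf.position(0);
                      // Append the reversed chunk to the output.
                      while (buf.hasRemaining()) {
                          dst.write(buf);
                      }
                      remaining -= len;
                  }
              }
          }

          public static void main(String[] args) throws IOException {
              reverse(Paths.get(args[0]), Paths.get(args[1]));
          }
      }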

    Read the article

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a Windows & SQL 2000 box to a virtualised Windows & SQL 2008 box. The VMware box is on a different site, with better hardware, connectivity, etc. The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21GB. Transferring this to our virtual machine took around 7+ hours, which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine) but every following run adds about 80kb. My main loop looks like this (very simplified):
      $users = UsersModel::getUsers();
      foreach ($users as $user) {
          $obj = new doSomethingAwesome();
          $obj->run($user);
          $obj = null;
          unset($obj);
      }
    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution; some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?

    Read the article

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64 encoded audio and image data. This data will be served via HTTP in the form of XML, and the Base64 data will be inline. These files will most likely break 20MB and higher. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up but is largely unnecessary, because it is likely that this data will be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data I want to see what the consensus is. As of now, I am leaning toward storing them in the filesystem for efficiency reasons, but if it is feasible to store them in a database it would be much easier to manage the data.

    Read the article

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are effectively (or not) working with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer is tough enough to find what you are looking for with just a few tables, but how about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make a change or inspect a change? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

    Read the article

  • What are alternatives to standard ORM in a data access layer?

    - by swampsjohn
    We're all familiar with basic ORM with relational databases: an object corresponds to a row and an attribute in that object to a column (or some slight variation), though many ORMs add a lot of bells and whistles. I'm wondering what other alternatives there are (besides raw access to the database or whatever you're working with). Alternatives that just work with relational databases would be great, but ones that could encapsulate multiple types of backends besides just SQL (such as flat files, RSS, NoSQL, etc.) would be even better. I'm more interested in ideas than in specific implementations and what languages/platforms they work with, but please link to anything you think is interesting.

    Read the article

  • Repartition hard drive using Mac OS X, keep existing data

    - by Jonny
    I got a 1 TB disk a year or so ago and loaded it with some hundreds of GB of data. I somehow neglected to check the file system, which turns out to be FAT-32 and thus cannot hold files bigger than 4 GB. So now I want to change it, without deleting the data. I thought I'd just make a new partition in the so far unused space. Then, with the new partition in place, copy/move the data into it, delete the old FAT-32 partition, and make the new partition bigger again... or just make a few more partitions. The critical step here is: can I make that new partition without ruining the data? The data should be fairly sequentially added to the start of the disk, but what do I know... so that's why I'm asking. Can I safely use Disk Utility for this? Any recommended file system?

    Read the article

  • Does MS Access update the data on the clipboard from a query when the data in the database changes?

    - by leeand00
    I was just debugging a macro in MS Access. When it hit the breakpoint, I ran a query and copied the data from it to the clipboard. Some of the values were null before stepping to the next step; I then ran the next step, which ran a query that changed the data I had on the clipboard. I then pasted the data, and the values that were null before had been changed by the query... leading to a rather large WTF on my part. So my question is: does MS Access update the data on the clipboard when it changes in the database? That's the only explanation I have for what occurred there.

    Read the article

  • filesize of large files in c

    - by endeavormac
    How can I get the filesize of a file in C when the filesize is greater than 4 GB? ftell returns a 4-byte signed long, limiting it to 2 GB. stat has a field of type off_t, which is also 4 bytes (not sure of the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?

    Read the article

  • Best practice to modularise a large Grails app?

    - by Mulone
    Hi all, A Grails app I'm working on is becoming pretty big, and it would be good to refactor it into several modules, so that we don't have to redeploy the whole thing every time. In your opinion, what is the best practice to split a Grails app into several modules? In particular I'd like to create a package of domain classes + relevant services and use it in the app as a module. Is this possible? Is it possible to do it with plugins? Cheers, Mulone

    Read the article

  • ActionBar SpinnerAdapter Large Branding followed by selection (spinner)

    - by SatanEnglish
    I'm trying to implement a spinner in the action bar that has a brand name above it, using the ActionBar setListNavigationCallbacks method if possible:
      actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
      actionBar.setListNavigationCallbacks(mSpinnerAdapter, null);
    Can anyone give me an idea of how to do this? I would put some code here, but I have no idea where to begin as I have not managed to find relevant information yet. Edit: using Android 4.0.
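
    A rough sketch of one way to get a branded two-line selection with list navigation, assuming a stock ActionBar on Android 4.0 (no support library): override getView so the collapsed action-bar view shows the brand above the current selection, while the dropdown rows keep a standard spinner layout. The layout R.layout.nav_item and the ids R.id.brand and R.id.title are hypothetical placeholders:

      import android.app.ActionBar;
      import android.app.Activity;
      import android.os.Bundle;
      import android.view.View;
      import android.view.ViewGroup;
      import android.widget.ArrayAdapter;
      import android.widget.SpinnerAdapter;
      import android.widget.TextView;

      public class BrandedNavActivity extends Activity {
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);

              final String[] items = {"Option A", "Option B", "Option C"};

              // Collapsed action-bar view: brand name above the current selection.
              // Dropdown rows fall back to the stock spinner layout.
              SpinnerAdapter adapter = new ArrayAdapter<String>(
                      getActionBar().getThemedContext(),
                      android.R.layout.simple_spinner_dropdown_item, items) {
                  @Override
                  public View getView(int position, View convertView, ViewGroup parent) {
                      // R.layout.nav_item is assumed to hold two TextViews: R.id.brand and R.id.title.
                      View row = getLayoutInflater().inflate(R.layout.nav_item, parent, false);
                      ((TextView) row.findViewById(R.id.brand)).setText("My Brand");
                      ((TextView) row.findViewById(R.id.title)).setText(getItem(position));
                      return row;
                  }
              };

              ActionBar bar = getActionBar();
              bar.setDisplayShowTitleEnabled(false);
              bar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
              bar.setListNavigationCallbacks(adapter, new ActionBar.OnNavigationListener() {
                  @Override
                  public boolean onNavigationItemSelected(int itemPosition, long itemId) {
                      // Swap in content for the selected item here.
                      return true;
                  }
              });
          }
      }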

    Read the article

  • inserting large number of dates

    - by Radhe
    How can I insert all the dates in a year (or more) into a table using SQL? My dates table has the following structure: dates(date1 date). Suppose I want to insert all dates between "2009-01-01" and "2010-12-31" inclusive. Is there an SQL query for this?

    Read the article

  • TSQL, select values from large many-to-many relationship

    - by eugeneK
    I have two tables, Publishers and Campaigns, both of which have similar many-to-many relationships with Countries, Regions, Languages and Categories. More info: Publisher2Categories has publisherID and categoryID, which are foreign keys to publisherID in Publishers and categoryID in Categories, which are identity columns. On the other side I have Campaigns2Categories with campaignID and categoryID columns, which are foreign keys to campaignID in Campaigns and categoryID in Categories, which again are identities. The same goes for the Regions, Languages and Countries relationships. I pass a certain publisherID to the query and want to get the campaignIDs of Campaigns that share at least one Region, Country, Language or Category value with that Publisher. Thanks

    Read the article

  • designing an ASP.NET MVC partial view - showing user choices within a large set of choices

    - by p.campbell
    Consider a partial view whose job is to render markup for a pizza order. The desire is to reuse this partial view in the Create, Details, and Update views. It will always be passed an IEnumerable<Topping> and output a multitude of checkboxes. There are lots... maybe 40 in all (yes, that might smell). A-OK so far.
    Problem: the question is how to include the user's choices on the Details and Update views. From the datastore, we've got a List<ChosenTopping>. The goal is to have each checkbox set to true for each chosen topping. What's the easiest to read, or most maintainable, way to achieve this?
    Potential solutions:
    1. Create a ViewModel with the List<Topping> and the List<ChosenTopping>. Write out the checkboxes as per normal; while writing each, check whether the ToppingID exists in the list of ChosenTopping.
    2. Create a new ViewModel that's a hybrid of both. Perhaps call it DisplayTopping or similar. It would have properties ID, Name and IsUserChosen. The respective controller methods for Create, Update, and Details would have to create this new collection with respect to the user's choices as they see fit. The Create controller method would basically set all to false so that it appears to be a blank slate.
    The real application isn't pizza, and the organization is a bit different from the fakeshot, but the concept is the same. Is it wise to reuse the control for the 3 different scenarios? How better can you display the list of options plus the user's current choices? Would you use jQuery instead to show the user selections? Any other thoughts on the potential smell of splashing up a whole bunch of checkboxes?

    Read the article

  • Robust Large File Transfer with WCF

    - by Sharov
    I want to transfer big files (1GB) over unreliable transport channels. When the connection is interrupted, I don't want to start the file transfer from the beginning. I can partially store it in a temp table and record the last read position, so when the connection is re-established I can request that the upload continue from that position. Is there any best practice for this kind of thing? I'm currently using the chunking channel.

    Read the article

  • Internet Explorer is unresponsive while loading a large page

    - by kdhamane
    We have an HTML page being rendered in the browser (IE) that causes the browser to hang. The page is generated through server-side script (ASP.NET, and viewstate is disabled). The page takes a long time to load (it's not a bandwidth issue since we can reproduce it on a local machine) and sometimes results in a script unresponsive error. On debugging the issue we found that the HTML size on the client side is 4.73 MB. There's also a lot of DOM traversal (using jQuery) after the document is ready (jQuery document.ready). After loading, the page simply hangs on any user interaction (scroll, mouseover, etc.). A CPU usage spike (25-50% usage) is seen during loading and on any user interaction.

    Read the article

  • How to get user input for 2 digit data

    - by oneMinute
    In an HTML form the user is expected to fill in / select some data and trigger an action, probably an HTTP POST. If your only requested data field is a two-digit value, you can use an HTML text input element to get it. Then you want to make it more usable and let the user easily select a value from an HTML select. But not all of your data is well ordered, so scanning through it by eye is cumbersome, because the data is meaningful through its relations: if there is no primary key for foreign key "12" it should not be shown; conversely, if a foreign key occurs a lot, it has some weight and could be displayed with more importance. So, what would your approach be? a) Use a text input to get the data and validate it with regex, JavaScript, ... b) Use some dropdown select. c) Any other way? Any answer will be appreciated :)

    Read the article

  • How can I make Excel correlate data from two data sets into a single graph?

    - by Tom Ritter
    I have two datasets, one sparser than the other. They look like this:
      Data Set 1:
      4  50
      5  55
      6  60
      7  70
      8  80
      Data Set 2:
      4  10
      6  20
      8  30
    In reality I have several hundred points rather than this few. I want them in the same graph, with the X axis running 4-8, the Y axis roughly 0-100, and two lines, one for each data set. What I get is two lines that are not correlated at all along the X axis, with the X axis labelled from one of the two datasets, so the labels are wrong for the other. The smaller data set is plotted one point per tick on the X axis, when I need it to skip ticks and actually line up with the other data set. I'm not married to Excel and am willing to try this in something else if it's free.

    Read the article

  • Load large images into Bitmap?

    - by GuyNoir
    I'm trying to make a basic application that displays an image from the camera, but when I try to load the .jpg from the SD card with BitmapFactory.decodeFile, it returns null. It doesn't give an out-of-memory error, which I find strange, but the exact same code works fine on smaller images. How does the generic gallery display huge pictures from the camera with so little memory?
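
    A common approach (and roughly what gallery-style apps do) is to decode only the image bounds first, then decode again with an inSampleSize large enough that the subsampled bitmap fits the target view, so the full-resolution pixel buffer is never allocated. A minimal sketch; the class name and target dimensions are illustrative:

      import android.graphics.Bitmap;
      import android.graphics.BitmapFactory;

      public final class BitmapLoader {
          // Decode a large JPEG down to roughly reqWidth x reqHeight to avoid
          // allocating the full-resolution pixel buffer.
          public static Bitmap decodeScaled(String path, int reqWidth, int reqHeight) {
              // First pass: read only the image dimensions, no pixel allocation.
              BitmapFactory.Options opts = new BitmapFactory.Options();
              opts.inJustDecodeBounds = true;
              BitmapFactory.decodeFile(path, opts);

              // Pick a power-of-two subsampling factor that keeps the decoded
              // size at or above the requested size.
              int inSampleSize = 1;
              while (opts.outWidth / (inSampleSize * 2) >= reqWidth
                      && opts.outHeight / (inSampleSize * 2) >= reqHeight) {
                  inSampleSize *= 2;
              }

              // Second pass: decode the subsampled bitmap.
              opts = new BitmapFactory.Options();
              opts.inSampleSize = inSampleSize;
              return BitmapFactory.decodeFile(path, opts);
          }
      }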

    Read the article

  • Data structure for an ordered set with many defined subsets; retrieve subsets in same order

    - by Aaron
    I'm looking for an efficient way of storing an ordered list/set of items where:
      - the order of items in the master set changes rapidly (subsets maintain the master set's order);
      - many subsets can be defined and retrieved;
      - the number of members in the master set grows rapidly;
      - members are added to and removed from subsets frequently;
      - it must allow for somewhat efficient merging of any number of subsets.
    Performance would ideally be biased toward retrieval of the first N items of any subset (or merged subset), and storage would be in-memory (and maybe eventually persistent on disk).
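
    One way to sketch the idea in Java (an illustrative assumption about the requirements, not a definitive design): keep the ordering in a single master structure and let subsets hold only item ids, so every subset inherits the master order for free and retrieving the first N items of any union of subsets is a single filtered pass. Rapid reordering would eventually want an indexed rank rather than rebuilding the master order, but the shape is the same:

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.LinkedHashSet;
      import java.util.List;
      import java.util.Map;
      import java.util.Set;

      // Subsets store only members; ordering lives solely in masterOrder, so
      // subsets never need to be touched when the master order changes.
      public class OrderedSubsets<T> {
          private final LinkedHashSet<T> masterOrder = new LinkedHashSet<>();
          private final Map<String, Set<T>> subsets = new HashMap<>();

          // Called whenever the master ordering changes.
          public void setMasterOrder(List<T> ordered) {
              masterOrder.clear();
              masterOrder.addAll(ordered);
          }

          public void addToSubset(String name, T item) {
              subsets.computeIfAbsent(name, k -> new HashSet<>()).add(item);
          }

          public void removeFromSubset(String name, T item) {
              Set<T> s = subsets.get(name);
              if (s != null) s.remove(item);
          }

          // First n items of the union of any number of subsets, in master order.
          public List<T> firstN(int n, String... subsetNames) {
              Set<T> members = new HashSet<>();
              for (String name : subsetNames) {
                  members.addAll(subsets.getOrDefault(name, Collections.<T>emptySet()));
              }
              List<T> result = new ArrayList<>(n);
              for (T item : masterOrder) {   // one pass over the master order
                  if (members.contains(item)) {
                      result.add(item);
                      if (result.size() == n) break;
                  }
              }
              return result;
          }
      }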

    Read the article

  • How do large sites accomplish row-level permissions?

    - by JayD3e
    So I am making a small site using CakePHP, and my ACL is set up so that every time a piece of content is created, an ACL rule is created to link the owner of the piece of content to the actual content. This allows each owner to edit/delete their own content. This method just seems so inefficient, because there are as many ACL rules as there are pieces of content in the database. I was curious: how do big sites, with millions of pieces of content, solve this problem?

    Read the article
