arcpy.Select_analysis choosing only a fraction of the record


I'm trying to divide a feature class into a number of smaller sets. So far I have been using Select_analysis with OBJECTID ranges to divide the feature class, but I know this is not the best method.



So for example, currently I have something like:



query = "objectid > 0 + AND objectid <= 1000"

arcpy.Select_analysis(r"C:UsersCHOKDesktopmulithreadingmultiprocessing.gdbContours",r"C:UsersCHOKDesktopmulithreadingmultiprocessing.gdbContours_Mereged",query_statement)


What I want to know is whether I can use the number of rows in the feature class to generate the query rather than relying on the objectid. Say I get the row count of the feature class using arcpy.GetCount_management(featureclass) and it returns 1.9 million — is there a way to divide those 1.9 million records into 10 parts based on the count rather than on objectid?
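For illustration, the count-based split being asked about can be sketched independently of arcpy: the helper below (hypothetical, not an arcpy function) only turns a total row count into near-equal (start, end) ranges that could then drive a query or cursor.

```python
# Hypothetical helper: split a total row count into near-equal (start, end) ranges.
# Assumes rows are numbered 1..total; names here are illustrative, not arcpy API.
def chunk_ranges(total, parts):
    base, extra = divmod(total, parts)   # base size per part, remainder spread over first parts
    ranges, start = [], 1
    for i in range(parts):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# e.g. 1.9 million rows into 10 parts:
print(chunk_ranges(1900000, 10)[0])   # -> (1, 190000)
```

The row count fed into such a helper could come from arcpy.GetCount_management, but note that row position and OBJECTID are not guaranteed to coincide once records have been deleted.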










Tags: arcpy, objectid






asked Apr 8 at 3:29 – Andy





























2 Answers






































I'm not aware of any way to do exactly what you're asking directly, but a workaround might be the following.



Can you add a new field to the feature class? If so, add an integer field and use the Field Calculator to write sequential numbers into it for each feature. One way to do this is explained here: https://support.esri.com/en/technical-article/000011137



You should modify that code to reset to 1 each time it reaches the number of datasets required, e.g.:



sets = 10
rec = 0
def autoIncrement():
    global rec
    global sets
    pStart = 1
    pInterval = 1
    if rec == 0 or rec >= sets:   # reset after 'sets' values so groups stay in 1..sets
        rec = pStart
    else:
        rec += pInterval
    return rec


          Then you can use Split By Attributes to split into multiple feature classes based on the value of this field (which would be 1-10 for 10 sets). See: http://desktop.arcgis.com/en/arcmap/latest/tools/analysis-toolbox/split-by-attributes.htm



Of course, if the dataset is dynamic, you would have to recalculate the field each time you wanted to repeat the operation.



Note that the field-calculator code above does not keep consecutive records together (i.e., each set is not the first 10% of records but one record in every 10). If you need consecutive records kept together, that can be done with a tweak to the above code.
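One possible form of that tweak — a sketch, assuming the total record count is known up front (e.g. from arcpy.GetCount_management) — is to advance the group number once every block of records instead of cycling on every row:

```python
# Sketch: assign group numbers 1..sets so consecutive records stay together.
# 'total' and the function name are illustrative assumptions, not arcpy API.
total = 100   # record count, known up front in this sketch
sets = 10
size = -(-total // sets)   # ceiling division: records per group
rec = 0
def autoIncrementBlocked():
    global rec
    rec += 1                       # row counter advances on every call
    return (rec - 1) // size + 1   # group number advances once per 'size' rows
```

Used as a field-calculator expression in the same way as the original, this would yield the first 10% of records in group 1, the next 10% in group 2, and so on.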






answered Apr 8 at 4:17, edited Apr 8 at 4:52 – Son of a Beach




















• Building off this, in arcpy you could set up code to generate a new field (it doesn't need to be LONG) and populate a unique value every x rows using arcpy.da.UpdateCursor, where x is calculated from the initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage of both this and Son of a Beach's approach is the time it takes to update the field. – smiller, Apr 8 at 4:29












• I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that. – Son of a Beach, Apr 8 at 4:45
































Creating and populating a new field could significantly slow things down.



This is why I tested the two alternatives below on a 150k-point dataset. Both use a similar approach — splitting the list of OBJECTIDs into equal-size chunks:



import arcpy, time
from arcpy import env
env.overwriteOutput = True

infc = "many_points"
OIDs = [row[0] for row in arcpy.da.TableToNumPyArray(infc, "objectid")]
n = len(OIDs) / 10
t0 = time.time()
N = 65
for i in xrange(0, len(OIDs), n):
    chunk = OIDs[i:i + n]
    q = 'OBJECTID >= %i AND OBJECTID <= %i' % (chunk[0], chunk[-1])
    arcpy.Select_analysis(infc, "C:/scratch/scratch.gdb/%s" % chr(N), q)
    N += 1
arcpy.AddMessage("Seconds %i using query" % int(time.time() - t0))

t0 = time.time()
mxd = arcpy.mapping.MapDocument("CURRENT")
lyr = arcpy.mapping.ListLayers(mxd, infc)[0]
N = 65
for i in xrange(0, len(OIDs), n):
    chunk = OIDs[i:i + n]
    lyr.setSelectionSet("NEW", chunk)
    arcpy.CopyFeatures_management(lyr, "C:/scratch/scratch.gdb/%s" % chr(N))
    N += 1
arcpy.AddMessage("Seconds %i using layer selection" % int(time.time() - t0))


          OUTPUT:



Seconds 57 using query
Seconds 34 using layer selection


It seems that using the setSelectionSet method on a layer, coupled with Copy Features, works much faster than Select with a query.



Note: populating an integer field on this sample set took 21 seconds.






answered Apr 8 at 9:09 – FelixIP























            Your Answer








            StackExchange.ready(function()
            var channelOptions =
            tags: "".split(" "),
            id: "79"
            ;
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function()
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled)
            StackExchange.using("snippets", function()
            createEditor();
            );

            else
            createEditor();

            );

            function createEditor()
            StackExchange.prepareEditor(
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: false,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: null,
            bindNavPrevention: true,
            postfix: "",
            imageUploader:
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            ,
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            );



            );













            draft saved

            draft discarded


















            StackExchange.ready(
            function ()
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fgis.stackexchange.com%2fquestions%2f318065%2farcpy-select-analysis-choosing-only-a-fraction-of-the-record%23new-answer', 'question_page');

            );

            Post as a guest















            Required, but never shown

























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            1














            I'm not aware of any way to do what you're asking directly. But a work around might be...



            Can you add a new field to the feature class? If so, add an integer field, and use the field calculator to add sequential numbers to this field for each feature. One way to do this is explained here: https://support.esri.com/en/technical-article/000011137



            You should modify this code to reset to 1 each time it reaches the number of datasets required. Eg,



            sets=10
            rec=0
            def autoIncrement():
            global rec
            global sets
            pStart = 1
            pInterval = 1
            if (rec == 0 or rec > sets):
            rec = pStart
            else:
            rec += pInterval
            return rec


            Then you can use Split By Attributes to split into multiple feature classes based on the value of this field (which would be 1-10 for 10 sets). See: http://desktop.arcgis.com/en/arcmap/latest/tools/analysis-toolbox/split-by-attributes.htm



            Of course, if it is a dynamic set of data, you would have to recalculate the field each time you wanted to do the same operation again.



            Note that the code for the field calculator above would not have the consecutive records kept together (ie, not the first 10% of records, but 1 record in every 10). If you need the consecutive records kept together, this could be done with a tweak to the above code.






            share|improve this answer




















            • 1





              Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

              – smiller
              Apr 8 at 4:29












            • I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

              – Son of a Beach
              Apr 8 at 4:45















            1














            I'm not aware of any way to do what you're asking directly. But a work around might be...



            Can you add a new field to the feature class? If so, add an integer field, and use the field calculator to add sequential numbers to this field for each feature. One way to do this is explained here: https://support.esri.com/en/technical-article/000011137



            You should modify this code to reset to 1 each time it reaches the number of datasets required. Eg,



            sets=10
            rec=0
            def autoIncrement():
            global rec
            global sets
            pStart = 1
            pInterval = 1
            if (rec == 0 or rec > sets):
            rec = pStart
            else:
            rec += pInterval
            return rec


            Then you can use Split By Attributes to split into multiple feature classes based on the value of this field (which would be 1-10 for 10 sets). See: http://desktop.arcgis.com/en/arcmap/latest/tools/analysis-toolbox/split-by-attributes.htm



            Of course, if it is a dynamic set of data, you would have to recalculate the field each time you wanted to do the same operation again.



            Note that the code for the field calculator above would not have the consecutive records kept together (ie, not the first 10% of records, but 1 record in every 10). If you need the consecutive records kept together, this could be done with a tweak to the above code.






            share|improve this answer




















            • 1





              Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

              – smiller
              Apr 8 at 4:29












            • I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

              – Son of a Beach
              Apr 8 at 4:45













            1












            1








            1







            I'm not aware of any way to do what you're asking directly. But a work around might be...



            Can you add a new field to the feature class? If so, add an integer field, and use the field calculator to add sequential numbers to this field for each feature. One way to do this is explained here: https://support.esri.com/en/technical-article/000011137



            You should modify this code to reset to 1 each time it reaches the number of datasets required. Eg,



            sets=10
            rec=0
            def autoIncrement():
            global rec
            global sets
            pStart = 1
            pInterval = 1
            if (rec == 0 or rec > sets):
            rec = pStart
            else:
            rec += pInterval
            return rec


            Then you can use Split By Attributes to split into multiple feature classes based on the value of this field (which would be 1-10 for 10 sets). See: http://desktop.arcgis.com/en/arcmap/latest/tools/analysis-toolbox/split-by-attributes.htm



            Of course, if it is a dynamic set of data, you would have to recalculate the field each time you wanted to do the same operation again.



            Note that the code for the field calculator above would not have the consecutive records kept together (ie, not the first 10% of records, but 1 record in every 10). If you need the consecutive records kept together, this could be done with a tweak to the above code.






            share|improve this answer















            I'm not aware of any way to do what you're asking directly. But a work around might be...



            Can you add a new field to the feature class? If so, add an integer field, and use the field calculator to add sequential numbers to this field for each feature. One way to do this is explained here: https://support.esri.com/en/technical-article/000011137



            You should modify this code to reset to 1 each time it reaches the number of datasets required. Eg,



            sets=10
            rec=0
            def autoIncrement():
            global rec
            global sets
            pStart = 1
            pInterval = 1
            if (rec == 0 or rec > sets):
            rec = pStart
            else:
            rec += pInterval
            return rec


            Then you can use Split By Attributes to split into multiple feature classes based on the value of this field (which would be 1-10 for 10 sets). See: http://desktop.arcgis.com/en/arcmap/latest/tools/analysis-toolbox/split-by-attributes.htm



            Of course, if it is a dynamic set of data, you would have to recalculate the field each time you wanted to do the same operation again.



            Note that the code for the field calculator above would not have the consecutive records kept together (ie, not the first 10% of records, but 1 record in every 10). If you need the consecutive records kept together, this could be done with a tweak to the above code.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Apr 8 at 4:52

























            answered Apr 8 at 4:17









            Son of a BeachSon of a Beach

            1,581719




            1,581719







            • 1





              Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

              – smiller
              Apr 8 at 4:29












            • I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

              – Son of a Beach
              Apr 8 at 4:45












            • 1





              Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

              – smiller
              Apr 8 at 4:29












            • I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

              – Son of a Beach
              Apr 8 at 4:45







            1




            1





            Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

            – smiller
            Apr 8 at 4:29






            Building off this, in arcpy you could set up code to generate a new field (doesn't need to be LONG) and populate using da.updateCursor a unique value every x rows, where x is calculated based on initial count. Then split the dataset using Split Layer By Attributes. The advantage here is that you don't need to iterate through selections and can use the split tool to break up the feature class; the disadvantage to both this and Son of a Beach's is the time to update the field.

            – smiller
            Apr 8 at 4:29














            I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

            – Son of a Beach
            Apr 8 at 4:45





            I have updated the answer to use Split By Attributes, as recommended by @smiller. Good idea, that.

            – Son of a Beach
            Apr 8 at 4:45













            1














            Creating and populating new field could significantly slow things down.



            This is why I tested 2 alternatives below on 150k long point dataset. They use similar approach - splitting the list of OBJECTIDs into equal size chunks:



            import arcpy, time
            from arcpy import env
            env.overwriteOutput = True

            infc="many_points"
            OIDs=[row[0] for row in arcpy.da.TableToNumPyArray(infc,"objectid")]
            n=len(OIDs)/10
            t0 = time.time()
            N=65
            for i in xrange(0, len(OIDs), n):
            chunk=OIDs[i:i + n]
            q='OBJECTID >=%i AND OBJECTID <=%i' %(chunk[0],chunk[-1])
            arcpy.Select_analysis(infc, "C:/scratch/scratch.gdb/%s"%chr(N), q)
            N+=1
            arcpy.AddMessage("Seconds %i using query" %int(time.time()-t0))

            t0 = time.time()
            mxd = arcpy.mapping.MapDocument("CURRENT")
            lyr = arcpy.mapping.ListLayers(mxd,infc)[0]
            N=65
            for i in xrange(0, len(OIDs), n):
            chunk=OIDs[i:i + n]
            lyr.setSelectionSet ("NEW",chunk)
            arcpy.CopyFeatures_management(lyr, "C:/scratch/scratch.gdb/%s"%chr(N))
            N+=1
            arcpy.AddMessage("Seconds %i using layer selection" %int(time.time()-t0))


            OUTPUT:



            Seconds 57 using query
            Seconds 34 using using layer selection


            It seems using setSelectionSet method on a layer coupled with Copy features works much faster than Select with query.



            Note: Populating integer field in this sample set took 21 second.






            share|improve this answer



























              1














              Creating and populating new field could significantly slow things down.



              This is why I tested 2 alternatives below on 150k long point dataset. They use similar approach - splitting the list of OBJECTIDs into equal size chunks:



              import arcpy, time
              from arcpy import env
              env.overwriteOutput = True

              infc="many_points"
              OIDs=[row[0] for row in arcpy.da.TableToNumPyArray(infc,"objectid")]
              n=len(OIDs)/10
              t0 = time.time()
              N=65
              for i in xrange(0, len(OIDs), n):
              chunk=OIDs[i:i + n]
              q='OBJECTID >=%i AND OBJECTID <=%i' %(chunk[0],chunk[-1])
              arcpy.Select_analysis(infc, "C:/scratch/scratch.gdb/%s"%chr(N), q)
              N+=1
              arcpy.AddMessage("Seconds %i using query" %int(time.time()-t0))

              t0 = time.time()
              mxd = arcpy.mapping.MapDocument("CURRENT")
              lyr = arcpy.mapping.ListLayers(mxd,infc)[0]
              N=65
              for i in xrange(0, len(OIDs), n):
              chunk=OIDs[i:i + n]
              lyr.setSelectionSet ("NEW",chunk)
              arcpy.CopyFeatures_management(lyr, "C:/scratch/scratch.gdb/%s"%chr(N))
              N+=1
              arcpy.AddMessage("Seconds %i using layer selection" %int(time.time()-t0))


              OUTPUT:



              Seconds 57 using query
              Seconds 34 using using layer selection


              It seems using setSelectionSet method on a layer coupled with Copy features works much faster than Select with query.



              Note: Populating integer field in this sample set took 21 second.






              share|improve this answer

























                1












                1








                1







                Creating and populating new field could significantly slow things down.



                This is why I tested 2 alternatives below on 150k long point dataset. They use similar approach - splitting the list of OBJECTIDs into equal size chunks:



                import arcpy, time
                from arcpy import env
                env.overwriteOutput = True

                infc="many_points"
                OIDs=[row[0] for row in arcpy.da.TableToNumPyArray(infc,"objectid")]
                n=len(OIDs)/10
                t0 = time.time()
                N=65
                for i in xrange(0, len(OIDs), n):
                chunk=OIDs[i:i + n]
                q='OBJECTID >=%i AND OBJECTID <=%i' %(chunk[0],chunk[-1])
                arcpy.Select_analysis(infc, "C:/scratch/scratch.gdb/%s"%chr(N), q)
                N+=1
                arcpy.AddMessage("Seconds %i using query" %int(time.time()-t0))

                t0 = time.time()
                mxd = arcpy.mapping.MapDocument("CURRENT")
                lyr = arcpy.mapping.ListLayers(mxd,infc)[0]
                N=65
                for i in xrange(0, len(OIDs), n):
                chunk=OIDs[i:i + n]
                lyr.setSelectionSet ("NEW",chunk)
                arcpy.CopyFeatures_management(lyr, "C:/scratch/scratch.gdb/%s"%chr(N))
                N+=1
                arcpy.AddMessage("Seconds %i using layer selection" %int(time.time()-t0))


                OUTPUT:



                Seconds 57 using query
                Seconds 34 using using layer selection


                It seems using setSelectionSet method on a layer coupled with Copy features works much faster than Select with query.



                Note: Populating integer field in this sample set took 21 second.






                share|improve this answer













                Creating and populating new field could significantly slow things down.



                This is why I tested 2 alternatives below on 150k long point dataset. They use similar approach - splitting the list of OBJECTIDs into equal size chunks:



                import arcpy, time
                from arcpy import env
                env.overwriteOutput = True

                infc="many_points"
                OIDs=[row[0] for row in arcpy.da.TableToNumPyArray(infc,"objectid")]
                n=len(OIDs)/10
                t0 = time.time()
                N=65
                for i in xrange(0, len(OIDs), n):
                chunk=OIDs[i:i + n]
                q='OBJECTID >=%i AND OBJECTID <=%i' %(chunk[0],chunk[-1])
                arcpy.Select_analysis(infc, "C:/scratch/scratch.gdb/%s"%chr(N), q)
                N+=1
                arcpy.AddMessage("Seconds %i using query" %int(time.time()-t0))

                t0 = time.time()
                mxd = arcpy.mapping.MapDocument("CURRENT")
                lyr = arcpy.mapping.ListLayers(mxd,infc)[0]
                N=65
                for i in xrange(0, len(OIDs), n):
                chunk=OIDs[i:i + n]
                lyr.setSelectionSet ("NEW",chunk)
                arcpy.CopyFeatures_management(lyr, "C:/scratch/scratch.gdb/%s"%chr(N))
                N+=1
                arcpy.AddMessage("Seconds %i using layer selection" %int(time.time()-t0))


                OUTPUT:



                Seconds 57 using query
                Seconds 34 using using layer selection


                It seems using setSelectionSet method on a layer coupled with Copy features works much faster than Select with query.



                Note: Populating integer field in this sample set took 21 second.









                answered Apr 8 at 9:09









                FelixIP


























